Overview

While there are many benefits of making and makerspaces, one of the greatest challenges of implementing making in K–12 schools is how to assess collaborative, cross-disciplinary, and iterative making practices and outcomes. Many existing assessment tools for maker-centered classrooms focus on the final project or a portfolio. The Beyond Rubrics Toolkit instead offers embedded tools that capture evidence of the process of making; it is our first attempt at gathering qualitative and quantitative evidence while making is underway. We invite you to try out the tools in your classroom or makerspace, to adapt, modify, and remix what we have, and to give us your feedback. We look forward to seeing your Maker Elements in action!

Background:

Over the past five years, maker-centered learning has grown rapidly through grassroots efforts and an open community of practice. From its roots in out-of-school-time settings, it has bridged into formal classroom environments, where it has high potential to fundamentally change the boundaries, structures, and expectations of traditional, content-driven schooling. Many believe maker-centered learning is a compelling approach that can spark and sustain student interest and develop critical skills and dispositions (Sheridan, Halverson, Litts, Brahms, Jacobs-Priebe, & Owens, 2014; Martin, 2015). For maker education to take root in sustainable ways, a number of factors must be considered, from curriculum integration to teacher professional development, from equitable facilitation strategies to administrator buy-in, from capacity building to assessment methods.

Specifically, big questions around assessment and evidence of learning exist in the field today. Traditional summative assessment strategies do not capture or assess the open-ended, collaborative, cross-disciplinary, iterative, and dynamic nature of maker-centered learning or projects. Teachers urgently need tailor-made assessment resources that closely align with content standards and real-world skills (Remold, Fusco, Anderson, & Leones, 2016).

In response, the MIT Playful Journey Lab and Maker Ed launched Beyond Rubrics: Moving Towards Embedded Assessment in Maker Education to investigate and co-design embedded assessment tools and strategies for maker learning environments, where rigorous evidence of process-oriented, social, and exploratory student learning can be observed and collected in real time without constraining or interrupting the rich, complex, and iterative learning occurring in maker education.

The Beyond Rubrics project took place between late 2017 and mid-2019. The team intentionally chose to work with two U.S. middle schools on opposite coasts with differing approaches to, and implementations of, maker-centered learning. The two schools are quite distinct from one another, allowing us to gain broader perspectives and a more intimate understanding of the assessment approaches that teachers are navigating. Because co-design and co-creation were critical foundations of the project team’s philosophy, we conducted in-person workshops, three with each school partner, for 1–2 days each, over the course of the project (1.5 academic years). The workshops built on one another, and each allowed for practitioner-researcher collaboration; testing and prototyping of assessment tools integrated with curriculum; open dialogue about the utility and ease of assessment approaches as well as the necessary constraints of systems; active planning and reflection time; and general hands-on support. Between workshops, site visits and observations occurred regularly, with consistent documentation supporting the overall work. Although our original focus was on designing tools for evidence collection, through the research process our scope shifted to include additional pieces that help teachers, students, and stakeholders develop a shared understanding of assessment approaches as a whole.


Maker Assessment Design Principles:

Our exploration of assessment for making is inspired by an approach called embedded assessment.

Embedded assessment is a form of assessment that is woven directly into the learning environment and its activities, so that student learning can be monitored and supported in real time without interrupting the flow of learning (Shute, Ventura, Bauer, & Zapata-Rivera, 2009; Wilson & Sloane, 2000). It has been widely adopted in digital learning environments such as simulations and video games, where tasks are designed within the system to elicit evidence of desired outcomes and the rich data generated during performance are captured and processed automatically and rapidly (Kim & Shute, 2015). With well-designed embedded assessments in place, students’ actions within a learning environment provide robust evidence of underlying competencies, and the distinction between assessment and learning is blurred.

Based on the literature and conversations with the teachers from the two partnering middle schools, we established five design principles of maker assessment to guide our initial conception of assessment tools for making (Murai et al., 2019):

  • Maker assessment should be construct-driven.
  • Maker assessment should involve students as active participants in the assessment process.
  • Maker assessment should be evidence-centered, generating visible, tangible, and varied forms of evidence for the underlying constructs.
  • Maker assessment should be seamlessly woven into the culture of the classroom and learning environment.
  • Maker assessment should be fun and inviting for all learners.

We also believe that assessment, writ large, should be multidimensional, ongoing, performance-based, grounded in the values of learners and teachers, flexible, embedded, and playful! Taken together, these approaches may help shift the preconceived notion that assessment is a separate, summative activity.


The toolkit was created by the MIT Playful Journey Lab and Maker Ed in collaboration with Albemarle County School District, Portola Valley School District, and the San Mateo County Office of Education. This project is generously supported by the National Science Foundation under Grant #1723459. It is released under a Creative Commons BY-NC-SA license.

References:

Kim, Y. J., & Shute, V. J. (2015). The interplay of game elements with psychometric qualities, learning, and enjoyment in game-based assessment. Computers & Education, 87, 340-356.

Murai, Y., Kim, Y., Martin, E., Kirschmann, P., Rosenheck, L., & Reich, J. (2019). Embedding assessment in school-based making: Preliminary explorations of principles for embedded assessment in maker learning. In P. Blikstein & N. Holbert (Eds.), FabLearn ’19: Proceedings of the 8th Annual Conference on Creativity and Fabrication in Education. New York, NY: ACM. Retrieved from https://drive.google.com/file/d/1_-P8_ByhY73oATW8Myrd9-xDD8icFJCC/vie.

Remold, J., Fusco, J., Anderson, K., & Leones, T. (2016). Communities for maker educators: A study of the communities and resources that connect educators engaged in making. Menlo Park, CA: SRI International.

Sheridan, K., Halverson, E., Litts, B., Brahms, L., Jacobs-Priebe, L., & Owens, T. (2014). Learning in the making: A comparative case study of three makerspaces. Harvard Educational Review, 84(4), 505-531.

Shute, V. J., Ventura, M., Bauer, M., & Zapata-Rivera, D. (2009). Melding the power of serious games and embedded assessment to monitor and foster learning. Serious games: Mechanisms and effects, 2, 295-321.

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208.