The Design Principles Documentation (DPD) Project followed the DML Badges for Lifelong Learning awardees as they proposed and implemented their badging systems. The DPD team categorized badging projects' practices in terms of how they recognized, assessed, motivated, and studied learning.
The Seed Phase presents the assessment principles we found and links to the specific projects that enacted those principles. The Sprout Phase presents external resources linked to the principles. The Bloom Phase presents the big ideas we learned about assessment and badges.
The Digital Media and Learning 2012 Badges for Lifelong Learning Initiative funded 30 projects to develop digital badge systems. We captured the emerging knowledge among these projects as they designed, implemented, and maintained their badge programs. First, we identified the specific practices that the 30 projects intended for recognizing, assessing, motivating, and studying learning with digital badges. We then clustered and organized those practices into general design principles in each of these four categories.
This work focuses on the badging projects' assessment practices and the implications of their assessment choices for learning. Many projects had thought hard about the kinds of learning outcomes they wanted to foster, and the DPD team aimed to document the alignment between assessment practices and intended learning outcomes. Many practices shifted from the original intentions to accommodate the badging platform or the audiences with whom the projects were working.
Common practices were then categorized into more general design principles for assessing learning in digital badge systems. This resulted in the ten principles summarized below. These are not offered as "best practices" for assessing learning in badge systems. Rather, these design principles point new and existing projects toward appropriate practices in specific contexts.
Generally speaking, the DPD Project is informed by sociocultural theories of learning that emphasize the contextual nature of knowledge. The goals were well suited to the approach to theorizing associated with design-based research. While the DPD team was not advising projects, the projects were iteratively refining their practices. By documenting this process more systematically across projects, we captured emergent local theories about assessing learning with digital badges in particular contexts.
We have derived ten design principles for assessing learning in digital badge systems from the common enacted practices that emerged as systems were implemented. These general principles were then broken down into the specific practices used in different projects.
Use leveled badge systems (29): Twenty-nine of the 30 projects included some kind of “leveling” system that students would move through as they practiced new skills, as opposed to a “flat” system where all badges have equal value. Sixteen used competency levels for mastery, ten used smaller badges that added up to larger achievements we called metabadges, and three used categories like leadership and collaboration. Example: Small badges for activities will add up to larger mastery badges (BuzzMath).
Align assessment activities to standards: Create measurable learning objectives (26): Standards varied from national and state standards to internal standards set by the parent organizations of the projects. Ten projects used internal standards, seven used national or state standards, and nine used Common Core standards. Example: There is already a large community of teachers within the Make Waves community who are mapping the S2R curriculum to their own objectives and standards (S2R Medals).
Use rubrics (16): Projects used rubrics as an aid to score learner artifacts. Twelve projects developed rubrics for the specific artifacts they were assessing, while four projects used general pre-existing rubrics. Example: Rubrics are competency based and generated ad hoc by individual teachers. However, the project is looking to standardize the process and pull the rubrics into a system (LevelUp).
Use formative functions of assessment (15): Projects provided varying types and amounts of formative feedback to learners. Five projects used primarily peer feedback, three projects used primarily expert feedback, and seven projects used a combination of peer and expert feedback. Example: At the high school level, peers participate as panel judges for the final demonstrations and review the student demonstration with a rubric. Peers are also expected to provide formative assessment on peer blog entries online (Pathways for Lifelong Learning).
Enhance validity with expert judgment (12): Projects used expert judges to evaluate learner artifacts. Nine projects used primarily teacher or professional experts, two used primarily a computer scoring system, and one project used AI Tutors. Example: Badges are validated by community mentors. Peer feedback is given and used for refinement purposes. Badges are not awarded specifically because of feedback given by peers, but the artifacts that earn badges may be influenced by that feedback (Design for America).
Promote "hard" and "soft" skill sets (11): Projects promoted “soft skills” like leadership and collaboration alongside the “hard skills” they targeted. Example: The project wants "the assessment process to be as social as the learning is." There is a feedback loop in the workplace, and they want learning to mirror that organic process (MOUSE Wins!).
Use e-portfolios (8): Projects required learners to collect artifacts in a digital portfolio. One of these portfolio systems was open to the public, while seven of them were local to the community. Example: “Every S2R participant has their personal Reporter Page on www.makewav.es/s2r. This serves as an e-portfolio and permits their educators, supporters, friends, family and peers to see and evaluate their work” (S2R Medals).
Use performance assessments in relevant contexts (7): Projects used performance assessments to evaluate learners. Example: “The badges for each curricular area will be earned through written assessments, photo and video projects, and in-person demonstrations of proficiency” (Sweetwater AQUAPONS).
Use mastery learning (6): In this context we use the term “mastery learning” to mean that learners are given practice until they have mastered a single skill set, and then move to the next skill set. Two projects had human experts judge the artifacts, one had a computer expert judge the artifacts, and three projects used a combination of human and computer experts. Example: Badges in activities supported by AI tutors are validated through the AI tutor and through automated online testing (through Moodle), or automated detection of in-game events (through Unity) in simulator environments. Instructor approval is used where appropriate in addition to automated tools (CS2N).
Involve students at a granular level (3): Projects involved students in the design of the physical badges, as well as in the design of the pathways one must take to earn a badge. Example: In the Badge Constellation Design Process, Cooper-Hewitt is realizing that the badges should have "personality" and personal touches added by students. The process of designing a badge reflects the process that goes into receiving a badge (Design Exchange).
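To make the leveling idea above concrete, here is a minimal sketch of small badges rolling up into a larger “metabadge.” The badge names, data structure, and function are hypothetical illustrations only; none of the DML projects' actual implementations are reflected here.

```python
# Hypothetical sketch of a leveled badge system: small activity badges
# roll up into a larger "metabadge" once all required badges are earned.
# Badge names and requirements are illustrative, not from any DML project.

METABADGES = {
    "Fractions Mastery": {"Adding Fractions", "Comparing Fractions",
                          "Multiplying Fractions"},
}

def earned_metabadges(small_badges):
    """Return every metabadge whose required small badges are all earned."""
    earned = set(small_badges)
    return {name for name, required in METABADGES.items()
            if required <= earned}  # subset test: all requirements met
```

In this sketch, a learner holding only two of the three small badges earns no metabadge; completing the third unlocks it, mirroring the "small badges add up to larger mastery badges" pattern described above.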
In this Working Example, we aim to document and organize the emergent processes and practices of the DML projects into general design principles for assessing learning in the context of digital badge systems. Our goal is to share the general design principles with individuals who are interested in implementing digital badge systems. In the process, we have recorded the intended practices and goals of the DML projects.
Our immediate audience is anyone designing a digital badging system for learning. New projects can learn a great deal from the shifts projects made from their initial proposals to their enacted designs. However, these design principles for assessing learning point to practices that almost anyone designing a learning system might consider as they align their assessments to learning outcomes. While they emerged out of badging systems, the design principles have a broad relevance for assessment systems more generally.
As practices emerged, we immediately saw tensions between the practices, which led us to categorize the practices into principles that can be considered when building a digital badge system. Recognition practices directly impact assessment in that the way one wants to recognize learning can constrain the assessment choices a designer makes.
Another set of challenges arose when projects tried to implement their assessments in their badge platforms. In many cases, the features of the badge platform limited and constrained the kind of assessment a project could implement. For example, Who Built America had intended to implement peer assessment, but their platform did not support it. Instead, they designed activities that had to be done collaboratively, so the peer collaboration existed, but in a different form than what was intended.
Like digital badges, this is uncharted territory. We are looking across projects to see the systematic design principles that emerge. The closest example of this kind of work that we know about is the Design Principles Database project led by Yael Kali and Marcia Linn. Their project also captured design knowledge across multiple projects and helped share that knowledge. To do so, they distinguished between specific practices within projects and more general principles across projects.
This work also relates to many other projects' efforts:
GlassLab is investigating the validity and reliability of sophisticated computer-based assessments in videogame contexts. These assessments employ the evidence-centered design framework pioneered by Robert Mislevy, which uses sophisticated statistical models of student understanding. This work does not directly relate to digital badges but provides a good example of cutting-edge formative and summative assessment that might be useful in badge settings.
Sheryl Grant and Kristan E. Shawgo's annotated bibliography holds several articles that are useful for designing assessments in digital badge systems. One such article, Barak and Rafaeli (2004), discusses the effectiveness of question-posing and peer assessment. They found that learners engaged in peer assessment and question posing received higher grades on their final exams. Peer assessment can be a valuable tool when implemented well. Many projects are considering using peer assessment, but it is difficult to implement. The project Pathways for Lifelong Learning allows learners to participate as panel judges for final demonstrations of projects using a rubric. Learners are also expected to provide formative feedback to their peers on their blog entries. Pathways for Lifelong Learning has set up their peer assessment in such a way that it empowers learners and helps them engage in their own projects as well as in their peers’ projects.
We welcome any feedback and ideas on the design principles and the Working Example. For example, if you realize we are using different words for something familiar to you, let us know. And if there is anything we can do to make it easier to find the information you need, please let us know. In particular, we would like feedback from people with perspectives that are different from ours. You can put comments at the bottom of each entry.
The badge Design Principles Documentation Project team is led by Daniel Hickey, an Associate Professor in the Learning Sciences program at Indiana University. Learning Sciences PhD student Rebecca Itow is leading the assessment strand of this project, while Hickey, Andi Rehak, Christine Chow, Kat Schenke, and Cathy Tran lead the other strands. Nate Otto is working on the DML project Working Examples and other aspects of the project. Tara Kelly, Garrett Poortinga, and Thomas Smith are helping out with various things.
First, we clustered the specific practices of the 30 DML projects into the ten general design principles (described in the Seed phase). These principles (and the practices behind them) are a great context for organizing published research and other external resources so they are useful for others. We are just getting started, and hope others will contribute as our understanding of the practices, principles, and resources evolves.
As we talked to and observed different badge systems, we noticed a series of common practices that emerged. These practices took on different qualities depending on the context in which they were implemented, but the similarities among them guided the derivation of the ten design principles.
We then looked to literature to support the design principles. We looked at academic sources, blogs, and projects. In reviewing the literature, we confirmed that no one badge project will use all of the principles, as some of them have competing values, but that all of the principles present issues that badge system developers must resolve as they design their assessments.
We have gathered relevant assessment literature that can help badge system designers make informed decisions when designing assessments for their learning systems. These are examples of the kinds of articles we are finding:
Use leveled badge systems: Bulkley et al. (2010) found that answers marked incorrectly did not help teachers in designing specific reteaching strategies.
Align assessment activities to standards: Create measurable learning objectives: Darling-Hammond (1997) discusses the need to raise both standards and the systems in which they are employed to support teaching and learning.
Use rubrics: Many of the projects used rubrics. For the more summative and high-stakes assessments, there is plenty of literature about increasing the reliability of scores and the content validity of the evidence. For example:
- Popham (1997) provides a succinct list of guidelines one should consider when creating and using rubrics.
- Tierney and Simon (2004) provide a nice update on these issues.
We suspect that many badge projects are going to be more interested in the more formative assessment functions and peer assessment. They may find the more summative rubric guidelines problematic. Much of this work has been done in the area of writing assessment:
- Heidi Andrade (2000; Andrade & Yu, 2005; Andrade & Boulay, 2003) has done some really useful research on formative uses of rubrics. While it focuses on college writing, it is still generally useful.
- Anderson et al. (2007) did a nice study on using informal rubrics to guide "feedback conversations." This work builds on pioneering work by Duschl and Gitomer (1997) on "assessment conversations."
Use formative functions of assessment: Of course, many projects sought to take advantage of formative assessment functions to support learning. There is a vast literature out there that might be relevant to formative assessment with badges. For example:
- Schwartz & Arena (2009) make the case for choice-based assessments. Many researchers have argued that giving formative feedback enhances the learning experience (e.g., Black & Wiliam, 2009; Shepard, 2007), but Schwartz and Arena argue that knowing how to ask for formative feedback is itself a skill not being taught.
Enhance validity with expert judgment: Popham's (2007) chapter on validity highlights the information and practices teachers should consider to enhance the validity of the claims they make about learning.
Promote "hard" and "soft" skill sets: Schulz (2008) discusses the outcry for students to develop “soft skills” beyond academic knowledge.
Use e-portfolios: Gillespie et al. (1996) provide a review of the recent literature on portfolio assessment and address the topic of private and public portfolios.
Use performance assessments in relevant contexts: Mehrens, Popham, & Ryan, (1998) provide six guidelines for using performance assessment, and suggest that instructors should be careful in how they prepare students for such assessments lest they compromise the assessment.
Use mastery learning: Duncan & Hmelo-Silver (2009) define and discuss learning progressions, and advocate focusing on a smaller set of focused skills rather than a large set of skills in a perfunctory manner.
Involve students at a granular level: Stefani (1994) studies student marks and grades, and their effectiveness in comparison to teacher marks.
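Several of the principles and sources above involve rubric-based scoring. As a rough illustration of how a weighted rubric turns per-criterion ratings into an overall score, here is a minimal sketch; the criteria, weights, and 0-4 scale are assumptions made for the example, not drawn from any cited source or DML project.

```python
# Minimal weighted-rubric scorer. Each criterion is rated on a 0-4 scale;
# the overall score is the weighted sum of the ratings. The criteria and
# weights below are hypothetical examples.

RUBRIC_WEIGHTS = {"clarity": 0.4, "evidence": 0.6}  # weights sum to 1.0

def rubric_score(ratings):
    """Weighted sum of per-criterion ratings (each expected in 0-4)."""
    return sum(RUBRIC_WEIGHTS[c] * ratings[c] for c in RUBRIC_WEIGHTS)
```

For instance, ratings of 3 for clarity and 4 for evidence combine to a weighted score of 3.6 on the same 0-4 scale, which a badge system could then compare against a mastery cutoff.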
We will continue to refine the design principles as the DML projects keep moving ahead with their implementation of digital badge systems. The design principles will take into account the enacted practices of the projects and reflect the process and approach of badge systems.
The “things that we are creating” here are a set of design principles and related resources. We are trying to make sure that our audience (badge designers) adopts these categories for their own work so that they can communicate what they want to find out and what they learn more readily. We are trying to frame the ideas of academic research in ways that practitioners can access and use.
Different badge systems are already using the principles to connect with one another and think about their assessment choices. The broader community is engaging in excellent discussions around the principles and offering insights into what badge systems should consider as they implement their designs. The database will be open to anyone interested in designing badge systems, offering knowledge and key considerations that can be applied in practice.
Just as the blog post at HASTAC is attracting new comments and suggesting new resources, we hope that others will find this a natural place to share the resources that they locate. We also hope to scale by publishing a formal literature review on assessing learning with digital badges in widely-read peer-reviewed research review journals.
The project provides valuable resources and information to individuals across disciplines who want to implement digital badge systems. New badge systems can use the principles and the research literature to inform their decisions about their implementations. These resources can also further discussion about designing assessments in digital badge systems.
We captured the practices across the DML projects and identified a set of ten general design principles for assessing learning in digital badge systems. As shown in the projects, the claims digital badges make are directly affected by a project's assessment practices. Many projects awarded badges as a result of assessing learning and accomplishments that encompass a broad array of skills and abilities. Additionally, the assessing of learning interacts with and affects the strands of recognizing and motivating learners.
There are many considerations for the assessment practices of badge systems. The awarding of digital badges involves some measurement of learners’ digital engagement and learning experience. Through the assessment of learning, the projects award digital badges for completing certain tasks or demonstrating understanding, signaling to employers and institutions learners’ skills, experience, knowledge, and accomplishments. The projects employ various modes of assessing learning outcomes, including e-portfolios, automated grading, and expert grading. In addition, projects can involve students in the process of setting standards or criteria for the assessment of learning outcomes. Moreover, the projects include formative and summative assessment, scaffolding the learning process and providing feedback to learners on how to improve. Overall, the design principles for assessment offer important considerations for projects as they implement their designs.
At their core, digital badges recognize some kind of learning. But if one is going to recognize learning, there is usually some kind of assessment of that learning so that claims about learning can be substantiated by evidence. The design principles and the research literature provide projects with the resources they need to make well-informed decisions about the assessments they choose in their implementations.
Our biggest success was synthesizing the hundreds of practices across the 30 badging projects and distilling them into four sets of general design principles. We then connected those general principles to existing literature, providing a research base for a new phenomenon where little currently exists.
From the assessment practices of the DML projects, we derived ten general design principles for badge systems. The principles reflect the existing assessment practices of those systems and offer valuable guidance to individuals interested in implementing badge systems of their own.
The biggest challenges arose when the tensions within the practices and between strands (recognizing, assessing, motivating, and studying learning) emerged. Rather than try to resolve these tensions, however, we decided to acknowledge them as issues that badge system developers must consider when choosing a platform and designing their system.
We learned that the DML projects possessed important knowledge about the practices of implementing digital badges, and we realized the significance of documenting that knowledge for future and existing badge systems. In the face of challenges, the DML projects found ways to address these issues in their enacted practices. The projects described what they learned as they implemented digital badge systems. We realized quickly that it was important to capture that knowledge before it dissipated as teams moved forward with their designs.
We also learned that the contexts in which practices were implemented were extremely important to the learning outcomes of the assessments, so rather than try to derive principles for best practices, we derived principles to guide appropriate practices in specific contexts. For example, many projects were using formative feedback, but the different contexts of the projects necessitated different types of feedback. Some of it came from teacher or professional experts, some came from peers, and other projects used a combination of expert and peer feedback.
As part of the DPD project, we are sharing the design principles for assessing learning as a set of general guidelines that can be adapted for use in various badge systems. Individuals can apply the principles to their assessment practices in the context of learning with digital badges. Some principles may be relevant only to certain contexts, and not every project will use every principle, but new projects can employ the principles to see how they work in specific contexts, and existing projects can use them to talk to other projects and refine their practices.
We will continue to refine the design principles and add to the relevant research literature as projects evolve, people connect, and conversations develop around these resources. The next step is to follow the projects that persist after their funding from the DML initiative has ended, and refine the principles further considering the successes of the different projects.
The design principles are shared with the broader community of individuals interested in creating digital badge systems. We are making our findings available for new projects to apply to their efforts and existing projects to discuss with other projects and fine-tune their badge systems.
“Eventually we want the badge design principles database to be fully open and self-sustaining. What we really want to do is leave behind a network where these principles are continually refined and spread like “memes” across the open badges ecosystem” (Hickey 2012, via HASTAC). We want to foster conversations around the principles, research literature, and practices of badge systems so that these systems can grow and make informed decisions about their designs.