
Platform Design: Research Directions (Part 5 of 6)

This course has been a welcome primer on the essentials of OER: from the philosophical foundations of openness to the reasons for each of the five R’s to the provenance and application of Creative Commons.  We’ve had an introduction to and overview of an emerging field.  Of course, the domain of knowledge about OER runs much deeper than the overarching concepts we’ve been exposed to here, but revisiting them at a high level has been a healthy and helpful refresher of what many of us have seen emerge over the past ten years.

It was about ten years ago when I first had an idea for a platform for sharing.  Back then, building it would have been cutting edge.  Now, enthusiasm for yet another OER platform seems pretty tepid, maybe even worthy of a sigh.  So I’d like to ask this community for some advice.

To do so, I’d like to share some of my key takeaways and impressions from the course, and ask for counsel on appropriate next steps for a project.


Some context: while most people on this course seem to work in higher ed, I teach high school English, and when I began teaching 18 years ago I sought to gather the wisdom of my colleagues in a file cabinet in our department office.  (How quaint!)  After five years, I left teaching to become a musician in New York City.  It was excellent, but I missed the purpose-driven life of an educator, and so I returned to graduate school, and then back to teaching.  During this time, which also brought the advent of social networks and collaborative content management systems, I marveled (in disbelief) that no such network or platform had succeeded at creating a professional knowledge base for educators.  Certainly, this was not for want of trying.  Many efforts had and have since emerged, but none have succeeded at unifying the field.

When thinking about platforms, a few design principles occurred to me then, and, surprised at how infrequently they were adopted or experimented with, I set out about five years ago to test them.  It has been an experience of learning to build and design an organization, to collaborate and lead.  Not the same skills as being a high school teacher.


This course has validated some of the lessons learned during this time of building a small nonprofit and testing it with teachers.  Many mistakes were made.  Many are yet to be made, too, I’m sure.  Still, many successes have come also: implementation of the prototype for lesson study in a graduate school of education, validation by users in structured environments, and anecdotes that make a maker smile.

This course has also shed light on some essential truths that had been lurking underneath the surface of the work so far, but I hadn’t yet articulated.  Week by week, they unfolded like this:

Week one: I set out some of my hypotheses and first impressions in my first post: “Learning by Sharing: Why We Do, Sometimes Can’t, and Often Don’t.”  The key idea in that post is that when building OER platforms, design matters.  What I don’t fully articulate there is that two design principles that seemed central to me from the beginning of this work have rarely been implemented in the landscape: a topic-oriented architecture (like Wikipedia) and direct interaction with content (work isn’t trapped in documents, but engaged directly in the browser).  In my questions at the end of that post, I ask how to test the significance of these observations and assumptions more formally.

Week two: My post “The Commons: It’s the Community, Stupid” was about a key revelation: the commons isn’t really about the shared resource, the commons is really about our behavior in relation to the shared resource.  Building a repository or a platform won’t change education unless the repository or platform changes the way educators interact with it and each other.  OER are useful only insofar as we use them, and that’s what the community is about.

Week three and week four: both of these lessons and discussions helped reinforce how we can construct knowledge in a way that resolves the reusability paradox.  Justin Reich has articulated this challenge for years in another way: “curriculum doesn’t compile.”  In education, practical knowledge grows like a tree, becoming customized to our classrooms and rarely converging, so reusability requires revision and remixing.  Without seamless, easy revision and remixing, education resources will languish.


These statements can be articulated in three hypotheses:
  1. Finding content is easiest in a topic-oriented knowledge architecture. (Week one: design)
  2. Revising and remixing content easily requires direct (non-document-intermediated) interaction with content. (Weeks three and four: usability)
  3. Building community requires patterns of human interaction around content that are driven by fundamental professional or personal needs, matching intellectual demand and supply. (Week two: the commons)
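To make hypothesis 1 concrete, here is a minimal, purely illustrative sketch of what a topic-oriented knowledge architecture might look like as a data model: resources attach to nodes in a topic tree, so finding content means navigating topics rather than running keyword searches over documents.  All names here (Topic, KnowledgeBase, the sample lesson) are hypothetical and not drawn from any real platform.

```python
# A toy topic-oriented knowledge base (hypothesis 1): resources live on
# topic nodes, and teachers find content by walking the topic tree.
# Everything here is an illustrative assumption, not a real system.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    subtopics: list = field(default_factory=list)   # child Topic objects
    resources: list = field(default_factory=list)   # shared lessons, links, etc.

class KnowledgeBase:
    def __init__(self, root: Topic):
        self.root = root

    def find(self, path: list[str]) -> Topic | None:
        """Walk a topic path like ['English', 'Poetry'] down to its node."""
        node = self.root
        for name in path:
            node = next((t for t in node.subtopics if t.name == name), None)
            if node is None:
                return None
        return node

# Usage: a teacher browses English -> Poetry and sees every shared
# resource on that topic gathered in one place.
poetry = Topic("Poetry", resources=["Sonnet analysis lesson"])
english = Topic("English", subtopics=[poetry])
kb = KnowledgeBase(Topic("All subjects", subtopics=[english]))
print(kb.find(["English", "Poetry"]).resources)  # -> ['Sonnet analysis lesson']
```

The design choice the sketch highlights is convergence: like a Wikipedia article, each topic node is a single shared destination that accumulates resources, rather than each resource living in its own searchable silo.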


Over the past four years, with the informal and formal help of many people and the generosity of four schools and two foundations, I’ve developed a prototype platform.  It’s a third-generation prototype, growing incrementally towards a coherent vision.  The platform is still closed, because it’s not in a form that is ready for public consumption, but it has a few hundred users and feedback is positive.  We have not yet been systematic and rigorous in our testing—mostly because this work has always been in addition to a full-time job (but that feels like an excuse)—but we have not been shy about gathering lots of feedback, if not through scientific methodologies.

At this point, we’ve developed a topic-oriented architecture (hypothesis 1) and teachers access content directly (hypothesis 2).  But we haven’t yet activated tools for revising and remixing (hypothesis 2), because the chain of repercussions that follows once peer revision becomes possible requires design elements not yet implemented.  Lastly, we’ve built the most basic community interaction tools (hypothesis 3), but not yet the planned ones that are essential for matching intellectual demand and supply.

Developing and implementing these final components of a working prototype are the focus of several grant applications currently in the works. (Grateful for references if you know people!)


Of course, like any responsible person, I’ve wondered if this has an element of tilting at windmills, of throwing resources at an intractable problem, chasing after some imaginary, inevitably elusive vision of systemic change.  A well-respected researcher said on a call recently: lots of really smart people have spent hundreds of millions of dollars working on this—what makes you think your approach is any different?  (The same researcher also said that the expertise of those immersed in a subject sometimes gets in the way of seeing the benefits of novel solutions.)  So why persist?  In this case, it’s the overwhelmingly positive feedback from the teachers for whom this work is done that has continued to propel this project.

And so, since budgets are limited and the scope and cost of the work are growing, the work has transitioned to a phase in which I/we aim to be more rigorous in research and more scientific in testing.  Since the OpenEdMOOC community (especially its professors) are as closely tied to this work as anyone, I have three questions that I’d be grateful for guidance on, to ensure that this is a meaningful application of resources (not only money, but also years of my life and others’ too):


  • First, could someone offer guidance on what work has been done formally testing/researching knowledge architecture paradigms?  Since the 90s, the biggest paradigm battle was between Yahoo and Google, between trees and search.  Search won.  But Wikipedia models another information architecture paradigm.  And Reddit another.  These have thrived, if imperfectly.  What knowledge architectures have been tested in OER?  Of course, I’m particularly interested in: has a topic-oriented approach been tested?

  • Secondly and similarly, has there been much research on successfully scaling educator communities?  Edmodo created a user group for Language Arts teachers that has 550K members (though not all active).  Ask a question there and you’ll get an answer.  TeachersPayTeachers has created a supply of resources that covers virtually every content area.  Want something to teach?  You can find it there, but you have to pay for it.  One is a successful community (if not a successful business).  The other is a successful content repository (if not an open community).  We’ll succeed when the twain meet.  Is there recent research on essential components for scaling communities?  My favorite so far is the US Department of Education’s Exploratory Research on Designing Online Communities, which offers questions, if not directions.

  • Lastly, do you have recommendations of models for rigorous user (teacher) testing?  To a researcher, this question is, of course, hopelessly vague—there are thousands of models for testing—but it is meant to surface reports that offer replicable methodologies that are broad-reaching in their analysis of how teachers behave online when interacting with educational content or each other… and that are also understandable to someone who doesn’t know what a Q-methodology is.  My favorite example is from work done by the Smithsonian Institution in 2012 during their Digital Learning Resources Project.  Are you aware of other research that a (very) small team could emulate?


These are questions designed to help determine whether this work is a fruitful application of time and funds—and what next steps for this project make sense. 


So how do  people learn?  What are the mechanics of memory?  Can we distill thousands of articles and books to something that is manageable, digestible, and applicable to our classrooms?   Yes.   In brief, the cognitive process of learning has four basic stages: Attention : the filter through which we experience the world Encoding : how we process what our attention admits into the mind Storage : what happens once information enters the brain Retrieval : the recall of that information or behavior Almost everything we do or know, we learn through these stages, for our learning is memory, and the bulk of our memory is influenced by these four processes: what we pay attention to, how we encode it, what happens to it in storage, and when and how we retrieve it. Here’s a closer look at each: Attention: We are bombarded by sensory information, but we attend to only a small amount of it.  We constantly process sights, sounds, smells, and more, but our attention se