How To Participate In Digipo (September 2017 version)

Every time I say I can’t make it easier to participate in Digipo, I find a way to make it easier.

The current process involves no skills greater than knowing how to work a word processor, and (more importantly) allows students to participate anonymously if they wish, without having to sign up for Google accounts or have edits tracked under pseudonyms. We accomplish this through a Microsoft Word template and by submitting the files into the public domain.

You can of course use a more complex process, sign your name to the article, and use Google Docs as your central tool. Depending on your needs and skill level you may want to do that. It’s just not required anymore.

Here are the steps.

  1. Read (at least some of) the book.
  2. Pick a question to investigate from our list of 300+ questions, or make up your own.
  3. Have your students download this Microsoft Word template that guides them through an investigation of a question. Apply the skills from the book.
  4. Do whatever sort of grading, assessment, or feedback you want.
  5. Take the student reports whose authors have agreed to submit them into the public domain and zip up the Word documents. Mail them to michael.caulfield@wsu.edu. Introduce who you are, what the class is about, and a bit about your experience, as I do not open zip files from random people. Also give me a blurb about how your class would like to be identified on the site (students have the option of remaining anonymous too). For verification purposes, send it from your university account; I may email back to verify.
  6. I’ll put them on the Digipo site in a subdirectory with a bit about your class and give you a password that allows them to edit online going forward.
  7. At a later point we’ll assemble a small panel of professors who will go through the student work and choose ones to “promote” to the main directory based on quality. The key question reviewers will ask is whether the document provides better information than at least one of the top ten Google results for the question.

That’s it!


from Hapgood http://ift.tt/2fpamYN
via IFTTT


The History of the Future of Learning Objects and Intelligent Machines

This talk was delivered at MIT for Justin Reich’s Comparative Media Studies class “Learning, Media, and Technology.” The full slide deck is available here.

Thank you for inviting me to speak to your class today. I’m really honored to be here at the beginning of the semester, as I’m not-so-secretly hoping this gives me a great deal of power and influence to sow some seeds of skepticism about the promises you all often hear – perhaps not in this class, to be fair, but in your other classes, in the media, in the world at large – about education technology.

Those promises can be pretty amazing, no doubt: that schools haven’t changed in hundreds if not thousands of years and that education technology is now poised to “revolutionize” and “disrupt”; that today, thanks to the ubiquity of computers and the Internet (that there is “ubiquity” is rarely interrogated) we can “democratize,” “unbundle,” and/or “streamline” the system; that learning will as a result be better, cheaper, faster.

Those have always been the promises. Promises largely unfulfilled.

It’s important – crucial even – that this class is starting with history. I’ve long argued that ignorance of this history is part of the problem with education technology today: that its promises of revolution and innovation come with little to no understanding of the past – not just the history of what technologies have been adopted (or have failed to be adopted) in the classroom before, but the history of how education itself has changed in many ways and in some, quite dramatically, with or without technological interventions. (I’d add too that this is a problem with tech more broadly – an astounding and even self-congratulatory ignorance of the history of the industries, institutions, practices folks claim they’re disrupting.)

I should confess something here at the outset of my talk that’s perhaps a bit blasphemous. I recognize that this class is called “Learning, Media, and Technology.” But I’m really not interested in “learning” per se. There are lots of folks – your professor, for starters – who investigate technology and learning, who research technology’s effect on cognition and memory, who measure and monitor how mental processes respond to tech, and so on. That’s not what I do. That’s not what my work is about.

It’s not that I believe “learning” doesn’t matter. And it’s not that I think “learning” doesn’t happen when using a lot of the ed-tech that gets hyped – or wait, maybe I do think that.

Rather, I approach “learning” as a scholar of culture, of society. I see “learning” as a highly contested concept – a lot more contested than some researchers and academic disciplines (and entrepreneurs and journalists and politicians) might have you believe. What we know about knowing is not settled. It never has been. And neither neuroscience nor brain scans, for example, move us any closer to that. After all, “learning” isn’t simply about an individual’s brain or even body. “Learning” – or maybe more accurately “learnedness” – is a signal; it’s a symbol; it’s a performance. As such, it’s judged by and through and with all sorts of cultural values and expectations, not only those that we claim to be able to measure. What do you know? How do you know? Who do you know? Do you have the social capital and authority to wield what you know or to claim expertise?

My work looks at the broader socio-political and socio-cultural aspects of ed-tech. I want us to recognize ed-tech as ideological, as a site of contested values rather than a tool that somehow “progress” demands. Indeed, that’s ideology at work right there – the idea of “progress” itself, a belief in a linear improvement, one that’s intertwined with stories of scientific and technological advancement as well as the advancement of certain enlightenment values.

I’m interested not so much in how ed-tech (and tech more broadly) might change cognition or learning, but in how it will change culture and power and knowledge – systems and practices of knowing. I’m interested in how ed-tech (and tech more broadly) will change how we imagine education – as a process, as a practice, as an institution – and change how we value knowledge and expertise and even school itself.

I don’t believe we live in a world in which technology is changing faster than it’s ever changed before. I don’t believe we live in a world where people adopt new technologies more rapidly than they’ve done so in the past. (That is argument for another talk, for another time.) But I do believe we live in an age where technology companies are some of the most powerful corporations in the world, where they are a major influence – and not necessarily in a positive way – on democracy and democratic institutions. (School is one of those institutions. Ideally.) These companies, along with the PR that supports them, sell us products for the future and just as importantly weave stories about the future.

These products and stories are, to borrow a phrase from sociologist Neil Selwyn, “ideologically-freighted.” In particular, Selwyn argues that education technologies (and again, computing technologies more broadly) are entwined with the ideologies of libertarianism, neoliberalism, and new forms of capitalism – all part of what I often refer to as the “Silicon Valley narrative” (although that phrase, geographically, probably lets you folks here at MIT off the hook for your institutional and ideological complicity in all this). Collaboration. Personalization. Problem-solving. STEM. Self-directed learning. The “maker movement.” These are all examples of how ideologies are embedded in ed-tech trends and technologies – in their development and their marketing. And despite all the talk of “disruption”, these mightn’t be counter-hegemonic at all, but rather serve the dominant ideology and further one of the 21st century’s dominant industries.

I want to talk a little bit today about technology and education technology in the 20th century – because like I said, history matters. And one of the ideological “isms” that I think we sometimes overlook in computing technologies is militarism. And I don’t just mean the role of Alan Turing and codebreakers in World War II or the role of the Defense Department’s Advanced Research Projects Agency in the development of the Internet (although both of those examples – cryptography and the Internet – do underscore what I mean when I say infrastructure is ideological). C3I – command, control, communications, and intelligence. Militarism, as an ideology, privileges hierarchy, obedience, compliance, authoritarianism – it has shaped how our schools are structured; it shapes how our technologies are designed.

The US military is the largest military in the world. That also makes it one of the largest educational organizations in the world – “learning at scale,” to borrow a phrase from this course. The military is responsible for training – basic training and ongoing training – of some 1.2 million active duty soldiers and some 800,000 reserve soldiers. That training has always been technological, because soldiers have had to learn to use a variety of machines. The military has also led the development and adoption of educational technologies.

Take the flight simulator, for example.

One of the earliest flight simulators – and yes, this predates the Microsoft software program by over fifty years, but postdates the Wright Brothers by only about twenty – was developed by Edwin Link. He received the patent for his device in 1931, a machine that replicated the cockpit and its instruments. The trainer would pitch and roll and dive and climb, powered by a motor and organ bellows. (Link’s family owned an organ factory.)

Although Link’s first customers were amusement parks – the patent was titled a “Combination training device for student aviators and entertainment apparatus” – the military bought six in June of 1934, after a series of plane crashes earlier that year immediately following the US Army Air Corps’ takeover of US Air Mail service. Those accidents had revealed the pilots’ lack of training, particularly under night-time or inclement weather conditions. By the end of World War II, some 500,000 pilots had used the “Link Trainer,” and flight simulators have since become an integral part of pilot (and subsequently, astronaut) training.

(There’s a good term paper to be written – you are writing a term paper, right? – about the history of virtual reality and the promises and presumptions it makes about simulation and learning and experiences and bodies. But mostly, I’d argue if I were writing it, that much of VR in classrooms today does not have its origins in the Link Trainer so much as in the educational films that you read about in Larry Cuban’s Teachers and Machines. But I digress.)

The military works along a different principle for organizing and disseminating knowledge than does, say, the university or the library. The military is largely interested in teaching “skills.” Or perhaps more accurately, this is how military training is largely imagined and discussed: “skills training.” (Officer training, to be fair, is slightly different.) The military is invested in those skills – and in the teaching of those skills – being standardized. All this shapes the kinds of educational software and hardware that gets developed and adopted.

One of the challenges the military has faced, particularly in the twentieth century, is helping veterans to translate their skills into language that schools and civilian hiring managers understand. This is, of course, the origin of the GED test, which was developed during WWII as a way to assess whether those soldiers who’d dropped out of high school in order to enlist had attained high-school level skills – to demonstrate “competency” rather than rely on “seat time,” to put this in terms familiar to educational debates today. There has also been the challenge of translating skills within the military itself – say, from branch to branch – and within and across other federal agencies. New technologies, to a certain extent, have complicated things by introducing often incompatible software systems in which instruction occurs. And at the end of the day, the military demands regimentation, standardization – culturally, technologically.

I just want to lay out an abbreviated timeline here to help situate some of my following remarks:

I’m not suggesting here that the Web marks the origins of ed-tech. Again, you’ve read Larry Cuban’s work; you know that there’s a much longer history of teaching machines. But in the 1990s, we did witness a real explosion in not just educational software, but in educational software that functioned online.

In January of 1999, President Clinton signed Executive Order 13111 – “Using Technology To Improve Training Opportunities for Federal Government Employees.” Here’s the opening paragraph, which I’m going to read – apologies – simply because it sounds as though it could be written today:

Advances in technology and increased skills needs are changing the workplace at an ever increasing rate. These advances can make Federal employees more productive and provide improved service to our customers, the American taxpayers. We need to ensure that we continue to train Federal employees to take full advantage of these technological advances and to acquire the skills and learning needed to succeed in a changing workplace. A coordinated Federal effort is needed to provide flexible training opportunities to employees and to explore how Federal training programs, initiatives, and policies can better support lifelong learning through the use of learning technology.

One of the mandates of the Executive Order was to:

in consultation with the Department of Defense and the National Institute of Standards and Technology, recommend standards for training software and associated services purchased by Federal agencies and contractors. These standards should be consistent with voluntary industry consensus-based commercial standards. Agencies, where appropriate, should use these standards in procurements to promote reusable training component software and thereby reduce duplication in the development of courseware.

This call for standards – and yes, the whole idea of “standards” is deeply ideological – eventually became SCORM, the Sharable Content Object Reference Model (and one of the many acronyms that, if you work with education technology, will make people groan – and groan almost as much as a related acronym does: the LMS, the learning management system).

Indeed, SCORM and the LMS – their purposes, their histories – are somewhat inseparable. (And I want you to consider the implications of that: that the demands of the federal government and the US military for a standardized “elearning” experience has profoundly shaped one of the foundational pieces of ed-tech that is used today by almost all colleges and increasingly even K–12 schools.)

The SCORM standard was designed, in part, to make it possible to easily move educational content from one learning management system to another. Among the goals: reusability, interoperability, and durability of content and courses. (I’m not going to go into too much technical detail here, but I do want to recognize that this did require addressing some significant technical challenges.) SCORM had three components: content packaging, runtime communications, and course metadata. The content packaging refers to the packaging of all the resources needed to deliver a course into a single ZIP file. The runtime communications includes the runtime commands for communicating student information to and from the LMS, as well as the metadata for storing information on individual students. And the course metadata, obviously, includes things like course title, description, keywords, and so on. SCORM, as its full name implies, served to identify “sharable content objects” – that is the smallest unit in a course that contains meaningful learning content by itself – content objects that might be extracted and reused in another course. The third version of SCORM, SCORM 2004, also introduced sequencing, identifying the order in which these content objects should be presented.
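To make the runtime communications piece concrete, here is a minimal, hedged sketch of how a launched piece of course content might talk to an LMS through the SCORM 1.2 runtime API. The method names (LMSInitialize, LMSSetValue, and so on) and the cmi.* data model elements are defined by the standard; the findAPI helper and the particular values reported are only illustrative.

```javascript
// Minimal sketch of SCORM 1.2 runtime communications (illustrative only).
// The LMS exposes a global "API" object in a parent or opener window;
// the launched content walks up the frame hierarchy to find it.
function findAPI(win) {
  while (!win.API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return win.API || null;
}

var API = findAPI(window) || (window.opener ? findAPI(window.opener) : null);

if (API) {
  API.LMSInitialize("");                                    // start the session
  var learner = API.LMSGetValue("cmi.core.student_name");   // read learner data
  API.LMSSetValue("cmi.core.lesson_status", "completed");   // report status
  API.LMSSetValue("cmi.core.score.raw", "90");              // report a score
  API.LMSCommit("");                                        // persist the data
  API.LMSFinish("");                                        // end the session
}
```

The content packaging piece, by contrast, is the ZIP file already mentioned above, with an imsmanifest.xml at its root describing the course’s resources and metadata.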

The implications of all this are fairly significant, particularly if we think about the SCORM initiative as something that’s helped, almost a decade ago, to establish and refine what’s become the infrastructure of the learning management system and other instructional software, as something that’s influenced the development as well of some of the theories of modern instructional design. (Theory is, of course, ideology. But, again, so is infrastructure.) The infrastructure of learning software shapes how we think about “content” and how we think about “skills” and how we think about “learning.” (And “we” here, to be clear, includes a broad swath of employers, schools, software makers, and the federal government – so that’s a pretty substantial “we.”)

I will spare you the details of decades’ worth of debates about learning objects. It’s important to note, however, that there are decades of debate and many, many critics of the concept – Paulo Freire, for example, and his critique of the “banking model” of education. There are the critics too who argue for “authentic,” “real-world” learning, something that almost by definition learning objects – designed to move readily from software system to software system, from course to course, from content module to content module, from context to context – can never offer. I’d be remiss if I did not mention the work of open education pioneer David Wiley and what he has called the “reusability paradox,” which, to summarize, states that the more pedagogically useful a learning object is in a specific context, the less reusable it is in other contexts; the most decontextualized learning objects are reusable in many contexts, but they are the least pedagogically useful.

But like I said at the outset, in my own line of inquiry I’m less interested in what’s “pedagogically useful” than I am in what gets proposed by industry and what becomes predominant – the predominant tech, the predominant practice, the predominant narrative, and so on.

Learning objects have been blasted by theorists and practitioners, but they refuse to go away. Why?

The predominant narratives today about the future of learning are all becoming deeply intertwined with artificial intelligence. We should recognize that these narratives have been influenced by decades of thinking in a certain way about information and knowledge and learning (in humans and in machines): as atomized learning objects and as atomized, standardized skills.

There’s a long history of criticism of the idea of “intelligence” – its origins in eugenics; its use as a mechanism for race- and gender-based exclusion and sorting. It’s a history that educational psychology, deeply intertwined with the development of measurements and assessments, has not always been forthright about. Education technology, with its origins in educational psychology, is implicated in this. And now we port this history of “intelligence” – one steeped in racism and bias – onto machines.

But we’re porting a history of “skills” onto machines as well. This is, of course, the marketing used for Amazon’s Alexa. Developers “build” skills. They “teach” skills to the device. And it’s certainly debatable whether many of these are useful at all. But again, that’s not the only way to think about teaching machines. Whether or not something is “pedagogically useful,” there are reasons why the stories about it stick. The narrative about AI and skills is something to pay attention to – particularly alongside larger discussions about the so-called “skills gap.”

from Hack Education http://ift.tt/2jvnUGS
via IFTTT

I’ll Take That Text Extra Large: Bigify It

Some people like to spend their long airplane time watching movies or playing phone games. That’s fine. Me, I usually sleep, but on my cross country flight yesterday I wanted to try building a little HTML/jQuery tool.

Because it’s something I can use this week.

Because I can (running in my laptop’s localhost).

Well, almost. I lacked the internet access needed to look up fixes for the code I got wrong. But I got about 80% there.

Presenting… Bigify. The thing it does?

There is a small form field at the bottom; whatever is typed or pasted there is “bigified” on the main display.

Stunning, eh?

I have a few days this week of hands-on web workshops; I won’t be doing slides, mostly web demos. There is this part where I want to have everyone go to the same site. I will make short URLs, but still, it seems useful to display them as Big On Screen as Possible.

There are a number of jQuery code bits that dynamically resize text to the window size; I decided to try FitText, which is what I downloaded before my trip. I also made sure I had a local copy of the jQuery library normally served by Google, just so I could play while not connected.
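For flavor, here is a rough sketch of the Bigify idea as it might be wired up with jQuery and the FitText plugin. The element ids are made up and this is not the actual Bigify code, just a minimal version under those assumptions.

```javascript
// Rough sketch of the Bigify idea (hypothetical ids, not the real code).
// Assumes local copies of jquery.js and jquery.fittext.js are loaded.
$(function () {
  // FitText scales the font size of #bigtext to the width of its container;
  // a compressor below 1 makes the text larger.
  $("#bigtext").fitText(0.8, { minFontSize: "30px", maxFontSize: "300px" });

  // Whatever is typed or pasted in the form field replaces the big display.
  $("#bigify-input").on("input", function () {
    $("#bigtext").text($(this).val());
  });
});
```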

There’s a lot of room for improvement; I wanted to make the form field disappear when not being edited, but the show/hide jQuery worked to hide it, never to show it again (same with CSS hover tricks – maybe it’s the absolute positioning??). Maybe it could use query parameters to save pre-configured strings as URLs. Maybe some options for different display colors (light text on a black background for contrast??).
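If it helps anyone forking it, here is one speculative way those two ideas might look (again with hypothetical ids, not the actual Bigify code): read a ?text= query parameter to preload a string, and fade the form field rather than fully hiding it, which sidesteps some of the show/hide trouble with absolutely positioned elements.

```javascript
// Speculative additions (hypothetical ids; not the actual Bigify code).
$(function () {
  // Preload the display from a ?text= query parameter, so a pre-configured
  // string can be shared as a URL.
  var preset = new URLSearchParams(window.location.search).get("text");
  if (preset) {
    $("#bigtext").text(preset);
    $("#bigify-input").val(preset);
  }

  // Fade the form field out when it loses focus instead of hiding it,
  // and bring it back on focus or hover.
  $("#bigify-input")
    .on("blur", function () { $(this).fadeTo(200, 0.15); })
    .on("focus mouseenter", function () { $(this).fadeTo(200, 1); });
});
```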

Got ideas? Fork this thing!

Who knows? I have a flight home Wednesday, maybe I will do an update.

Update

Thanks to John Johnston for forking this and putting into play a better interface – a form field element that replaces the display text, so it feels like editing it right in place.

Some more tweaks are coming to the output, but it now uses a better font on mobile devices (I hope).


Featured Image: Large Order Of Toast flickr photo by JD Hancock shared under a Creative Commons (BY) license

from CogDogBlog http://ift.tt/2w4GutU
via IFTTT

Visualizing future trends for education and technology

With the help of Future Trends in Technology and Education friends and Patreon supporters, we now have a first FTTE infographic.

The idea was to organize all of the 85+ trends the report tracks into a single image. This first design is meant to fit on a single page, such as for a workshop handout.

FTTE visualization

The heart of it is the group of three main columns, which contain the bulk of FTTE content.  The very top contains the higher ed crisis or bubble trends; they appear up there because they rest on other trends, like pillars.  I showed the connection between specific technologies as they appear in the world and their educational instances (3D printing, digital video, etc.) by lining them up within a colored box.

Each trend contains countervailing trends as well.

Later I’d like to edit and compress it down to smaller sizes, such as for a card.  That would most likely involve combining trends into rubrics or mega-trends, like piling VR, AR, and MR together.  I can also turn this into an interactive object, with links from each trend.

What do you think?

from Bryan Alexander http://ift.tt/2ubmINB
via IFTTT

Information Underload

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

Take, for instance, the latest news on Watson. Watson, you might remember, is IBM’s AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.

So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:

“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.”

This is not just the case with cancer, of course. You’ve heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren’t particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.

In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.

You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)

In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.

Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.

We see this pattern again and again — companies coming along to tell us that their platform will help us with the firehose of content. But the big problem is not that it’s a firehose, but that it’s a firehose of sewage. It’s all haystack and no needle. And the reason this happens again and again is that what we so derisively call “content” nowadays is expensive to produce, and gets produced by a large number of well-paid people who in general have no significant marketing arm. To scale up that work is to employ a lot of people, but it doesn’t change your return on investment ratio. To make a dollar, you need to spend ninety cents, and that doesn’t change no matter how big you get. And who wants to spend ninety cents to make a dollar in today’s world?

Processing and promotion platforms, however, like Watson or MOOCs or Facebook, offer the dream of scalability, where there is zero marginal cost to expansion. They also offer the potential of monopoly and lock-in, to drive out competitors. And importantly, that dream drives funding which drives marketing which drives hype.

And this is why there is endless talk about the latest needle in a haystack finder, when what we are facing is a collapse of the market that funds the creation of needles. Netflix caught on. Let’s hope that the people who are funding cancer research and teaching students get a clue soon as well. More money to the producers of valuable content. Less to platforms, distributors, and needle-finders. Do that, and the future will sort itself out.


I’m guessing if you are reading this you already know this, but if you are interested in this stuff, make sure to read Audrey Watters’ This Week In Robots religiously, as well as her writing in this area, which has been very influential on me.


from Hapgood http://ift.tt/2vjxCOk
via IFTTT

Moodle and the next LMS: reflections and more questions

Last week Martin Dougiamas, the creator of Moodle, the world’s leading open source learning management system, joined our Future Trends Forum to discuss the future of that technology.

Someone on the Moodle.com site wrote up an extraordinarily rich report on the session, complete with numerous screen captures.  Bravo!

The full video recording is now available on YouTube:

During the hour Forum participants offered many questions.  Martin tackled a slew of them, yet still more came in.  I’d like to reproduce them here, edited very lightly.  They indicate the richness of today’s unfolding LMS discussion, and help illuminate where things might be headed.

Text question from Ed Finn: OER Game Changer – Moodle Community could separate it from Canvas and Blackboard who keep these types of sharing resources behind a wall.  I see it as similar to the Canvas Commons for resources with a social media component?

Text question from Sonja Strahl: Community question – Will the community you were discussing be available for only those with Moodle as their LMS (for both creation of OER and use of OER)? Or will it be open to everyone, and under Creative Commons license?  

Text question from Richard Wack: Accessibility – Blackboard recently acquired Ally which appears to be a very impressive tool to address accessibility as it pertains to courses. What is the present and future direction by Moodle on this important topic? Thank you. 

Text question from Josh: Anti-LMS – How do you respond to the anti-LMS pedagogy voices, even those who might object to an open-source tool like Moodle?  Does that debate interest you at all?

Text question from Ed Finn: Communication – Just out of curiosity, does Moodle offer social media, text and other communication coordination?  I know that Canvas has a rich development here where you can choose to communicate by email, tweet, text, or app.

Text question from Ed Finn: Versioning – What are your thoughts on different versions of Moodle and sharing between them?  How do you see the community working with multiple versions? 

What are you wondering about Moodle and the LMS, looking ahead?

My thanks to Martin and the Forum community for their generous time and thought.

 

from Bryan Alexander http://ift.tt/2ubV1RE
via IFTTT

I defy the world and go back to RSS

It may be perverse, but in this age of Facebook (now 2 billion strong) I’ve decided to rededicate myself to RSS reading.  That’s right: old school, Web 2.0 style.

Why?

A big reason is that Facebook’s front page is so, so massively unreliable.  Despite having huge numbers of people who are my friends, clients, and contacts, it’s just not a good reading and writing service.  Facebook’s black box algorithm(s) may or may not present a given user’s post for reasons generally inscrutable.  I’ve missed friends’ news about new jobs, divorces, and deaths because the Zuckerbergmachine deems them unworthy of inclusion in my personalized river of news.  In turn, I have little sense of who will see my posts, so it’s hard to get responses and very hard to pitch my writing for an intended audience.  Together, this makes the FB experience sketchy at best.  To improve our use of it we have to turn to experiments and research that remind me of Cold War Kremlinology.

Twitter helps a bit, but not too much.  They’re running their own feed management software to some unknown degree.  Moreover, while I can use Twitter to find some good content hosted elsewhere (blog posts, articles, podcasts, videos, etc.) I still keep missing items un-tweeted, or at least un-tweeted by people I follow.  So Twitter is a flawed filter.

What else can I use to conduct research into the swiftly developing worlds of technology and education?  Some individual platforms let me follow content there or via email alerts (for example: WordPress, Medium, Tumblr), but that fragments the web and becomes unmanageable as the number of platforms grows.

People claim that RSS readers are history.  It’s popular to proclaim that blogs are dead.  I defy them all.

So I’m back to the sweet, open goodness of RSS reading*.  For the rest of this post I’ll describe my current setup.

In 2013 Google Reader died, and I and millions of others went on a quest for a successor.  For my primary research needs I settled on the Digg Reader, and haven’t changed since.  It’s free, reliable, cleanly designed, easy to use.  I run it on several laptops.  On my phone I make do with Feedly, which is pretty but not serious.

Here’s what my feed setup looks like now.  The list of feed categories, organized into folders, occupies the left (grey-ish) column.  Output from one of those folders, Futures and Futurists, runs down the left two-thirds of the screen:

Digg Reader sample screenshot

Let me break this down.

One strength of RSS is the way it lets users arrange feeds into whatever sequence makes sense to them.  I like clumping feeds into categories, then arranging those folders into an order that works for my day.

My RSS feeds, part 1

Starting off that order are feeds directly based on my work (see screenshot to left).  There’s a folder with output from my various blogs, so I can see what impression I’m leaving, along with keyword searches for myself and my work.

Then there are folders for clients, broken down into different groups.  This way I can follow the progress of schools, organizations, governments, libraries, museums, and individuals I’ve helped and/or am currently working with.  As you can see from their placement in my workflow, they are a leading priority.  Some are represented here by organizational feeds, such as the Ithaka S&R blog.  Others appear through individual faculty members, librarians, or technologists.

Following that are feeds from Future Trends Forum guests.  That growing community is vital to my work, and I learn a great deal from these fine people.  Right after them come a set of futurists and other folks writing about the future (see compressed image up above): again, central to my work.

My RSS feeds - 2

Following that first group of folders (each containing a group of RSS feeds) comes another swarm.  This one is my main politics, economics, and environmental scan.  My readers know these huge trends play a major role in shaping both education and technology.

This begins with a survey of world news, from sources with a minimum of bias.  The Memeorandum trawl is a major force within this folder – and since that’s an aggregator, its results save me some time.  There are also several feeds for local (Vermont) politics, like the excellent web-based Vt Digger.

Then follow feed groups for economics, for environmental news, and for a loose category upon which I’ve slapped the label “information warfare” (some of which is actually about info ops, but also includes linked observations on culture and geopolitics).  Along with those folders are two dedicated to bias from the left and right.  Bloggers there instruct me on what the respective ideologies (and their branches: libertarian, feminist, socialist, etc.) are thinking, and also point me to news articles I might have otherwise missed.

I learn best when starting with a big picture, then drilling down into small units and more finely grained details, so this top-level section fits that mental stance.

my RSS feeds - 3

A third folder group follows, structured upon other dimensions of my research agenda.  Several trends and megatrends from FTTE get their folders here.  We begin with a daily reads list, which includes major publications (ex: Inside Higher Ed), several crucial bloggers (ex: Stephen Downes’ OLDaily), and several friends whose words mean a great deal to me both personally and professionally (ex: Alan Levine, Brian Lamb).

Next we get folders on higher education, libraries, technology, search, Google (because so important *and* so sprawling), and gaming (a rich and special interest).  Then two folders (because of so many blogs) on ed tech; one on MOOCs; one on gaming in education.

Following this third big section is a fourth one for fun and culture.  That has folders on Gothic literature, comedy, science fiction, books, friends with whom I do not have a professional connection, food, and music.  I’ll leave off a graphic for now, because they lead away from my research focus.

So that’s around 40 folders, and maybe 400 feeds.  Naturally I’ve curated these over time, and continue to add and subtract as we progress.

Does this giant pile and apparatus save me time?  Yes.  Instead of leaping from platform to platform, I just inhabit the Digg.  I don’t have to worry if Facebook has hidden someone’s latest, or if a story escaped people I follow on Twitter.

Yes, this is a lot of reading… but I’m a researcher and writer, and need this range of inputs.  We can’t do futures work without diversity and variety of sources.  Moreover, some repetition occurs across multiple feeds, which is itself useful.  I can look for different perspectives on the same story, while noting rising interest in a development as something potentially noteworthy as well.

There’s a politics here.  RSS reading is based on the open web, and I continue to fight for that, even in an age of rising silos and walled gardens.  Less obvious is a theme of conversation through connections, which is increasingly vital to me.  I love being able to arrange feeds across filter bubbles, and to see ideas move across boundaries.

I still use Twitter for professional reasons.  For whatever reason I can’t get professional discussions rolling on Facebook, but do manage to stir up good conversations on politics (!!), culture, and animals.

Is anyone else still using RSS?  Am I bonkers to do so?  Should I do a post like this about my Twitter setup?

(If I have time I’ll write about the foolishness of proclaiming blogs to be dead.)

*I wonder if I need to define RSS in 2017.  How many people will confuse the technological standard with this group?

from Bryan Alexander http://ift.tt/2u04mP9
via IFTTT

The History of the Pedometer (and the Problems with Learning Analytics)


These were my remarks as a guest speaker in Donna Murdoch’s class “Online Teaching and Learning – Applying Adult Learning Principles” this evening. I was asked to speak about learning analytics, but like I said in my keynote last week at NMC, ed-tech is boring. So this is a talk about pedometers.

“Know thyself” – this is an ancient maxim, of course. But it’s become not so much a philosophy of introspection or reflection but a compulsion for data collection and data analysis. We now live in a culture of quantification. (We have for a while now, no doubt.) All this is aided today, no doubt, by new computing technologies that create and collect massive amounts of personal data.

Learning analytics, in some ways, is a symptom of this data-driven culture – one that also is not new to education. Learning analytics are technologies that support and reflect the idea that we can collect and measure and analyze data about learners in order to know what they know, in order to optimize what and how they learn.

I want to invoke the guest speaker’s privilege and talk about something slightly different from what I was asked to speak about – that is, different from learning analytics. Now, I hope you’ll see that almost everything I say is very much related to learning analytics and to education technologies more broadly – to how we’re asked to hand over our personal data to various hardware and software companies, to our employers, to the government, to our schools under the guise of better “outcomes,” more productivity, and so on.

I want to talk a little bit about fitness trackers this evening.

“Wearables,” for what it’s worth, were featured in the 2016 Horizon Report for K–12, an annual report that predicts which education technologies are “on the horizon.” The “Quantified Self” appeared on the 2014 Horizon Report for Higher Education. In both cases, the Horizon Report predicted these technologies were four to five years from widespread adoption.

You hear these sorts of predictions all the time – that everyone is going to own or use X, Y, or Z technology in the next few years – but according to a recent study, only about 10% of Fitbit owners (and that’s of the less than 12% of US consumers who own fitness trackers) are still wearing the device after a year.

Beware the marketing hype.

Like all technologies, fitness trackers have a history – one that certainly predates Fitbit or Jawbone or the Nike Fuelband.

There’s some debate about who invented the first pedometer, which remains a core functionality of most activity trackers: that is, counting how many steps one takes per day. Wikipedia lists three possible inventors: Leonardo da Vinci, who sketched the design for a gear-driven device with a pendulum arm that would swing back and forth with every walking leg motion and measure distance traveled; Abraham-Louis Perrelet, a Swiss inventor who built a self-winding watch in 1770 that wound when the wearer walked and then built another device, based on that watch, in 1777 that could measure walking distance; and Thomas Jefferson (Americans do like stories in which we feature prominently in the invention of things, don’t we), who purportedly brought the first pedometer to the US, although it’s not known if he ever improved on the design as he never filed any patents for his inventions. A website that reviews fitness devices also suggests that Jean Fernel, a French craftsman, might have invented the first pedometer in 1525 or Robert Hooke, an English scientist, might have in 1674, or Hubert Sarton, another Frenchman, might’ve in 1778. It was John Harwood, a British man, who was awarded the first patent for a pedometer in 1924. So even if we date pedometers from that patent, we’re still looking at about 100 years of history; if we credit da Vinci, we’re looking at about 500 years of pedometers.

500 years, and still less than 12% of Americans own a fitness tracker. Be a little skeptical of those who insist that technologies are changing faster than ever or that we’re adopting new technologies more quickly than ever before.

Now, it’s worth asking why so many inventors have been interested in the pedometer concept. For these men I’ve just named, at least, their interest was not in improving “fitness” per se but in measuring distance. For da Vinci, the device had military applications; he also imagined it would help improve mapping.

The promotion of the pedometer as a fitness device started in the 1960s when Dr. Yoshiro Hatano, a professor at the Kyushu University of Health and Welfare, undertook some applied research into exercise and calories. Concerned about the rise in obesity in Japan and wanting to promote and reinforce daily activity as part of “good health,” Hatano began selling a device known as “Manpo-kei” – the 10,000 steps meter. Hatano had calculated that the average Japanese person walked about 3500 to 5000 steps a day. By increasing the number of steps to 10,000 (roughly 5 miles), the amount of calories burned obviously would increase as well – up to about 500 calories a day, which could translate into about 20 kilos of weight loss in a year, he claimed. 10,000 steps was, according to the marketing for the Manpo-kei, ideal.

There are plenty of reasons to question that claim. 10,000 steps is less some medically-advised threshold than it is a marketing gimmick. Hatano could have picked 7500 steps or 13,333. 10,000 steps is a nice round number, one that will take you about 100 minutes of moderate activity to accomplish – but it’s also an arbitrary number. 10,000 steps is a goal that’s based on a lot of assumptions about bodies and activity and physical ability too. Nevertheless the number – and the connection between “steps” and “fitness” – has stuck with us for 50 some-odd years now. 10,000 – that’s the goal that almost all fitness trackers set for us.

And so, we can debate whether or not measuring “steps” is the same as measuring “fitness.” But we should ask too: How well do these devices actually track “steps”? (Rather, how accurate are they in counting “steps” and converting all our physical activity into “steps”?)

Surprise, surprise. They’re far from perfect. It depends on where you wear the device – on your wrist, in your bra, in your pocket, in your purse. It depends on what kind of activity you undertake. A study published in 2013 found that these devices tended to underestimate the energy expended while standing or bicycling or jogging uphill. And it depends on the device, the brand. A recent study from Stanford found that six out of seven wristband activity monitors measured heart rate with an error rate of less than 5%. Not too bad. But none of these monitors measured energy expended – a.k.a. calories – accurately. The most accurate fitness device was off by an average of 27%. Off, in other words, by roughly one McDonald’s Cheeseburger.

These errors are pretty important if you’re making decisions about your diet based on the data you glean from your fitness tracker – like should you have a McDonald’s Cheeseburger or another glass of wine. These errors are really important if someone else is making decisions about you based on this data – like your employer deciding whether your participation in the company wellness program is adequate. Or your health insurance company deciding whether to deny you coverage based on your physical activity or lack thereof. Or your school tracking how much you exercise and what you eat and how much (and where) you sleep and giving you a grade for it.

Oral Roberts University, for example, beginning in the spring of 2016, required its incoming students to wear a Fitbit and encouraged them to log their personal data in the learning management system.

Also in 2016, the University of Michigan signed a $170 million deal with Nike. One provision of the contract allows Nike “to harvest personal data from Michigan athletes through the use of wearable technology like heart-rate monitors, GPS trackers and other devices that log myriad biological activities.”

Are these examples of “learner data”? They’re certainly examples of “student data,” right?

Whose data does the data collected by a fitness tracker belong to? What do the Terms of Service say? (You’ve read the Terms of Service, right?) What else, in addition to how many steps a wearer has taken in a day, do these devices track? What does the fitness tracker maker use this data for? Who does the fitness tracker maker share the data with? Who does the fitness tracker maker sell the data to? How long does the company retain it? Can a user request a copy of their data? Can the user delete it? These aren’t medically-approved devices, of course, but what is being collected is, no doubt, sensitive health data. Is that data safe, secure, private? Are there any legal protections regarding this data – that is, does it count as part of someone’s “medical record”?

What are the implications when we compel people – through health insurance or through employment or through the learning management system – to be monitored in this way?

The marketing tells us that this sort of tracking should be done for our own good, for our health and well-being. We should want to track and be tracked. The science? Well, the science, not so much. Indeed, one study published last year in the Journal of the American Medical Association found that those who wore fitness trackers lost less weight than those who did not.

Yes, that’s just one study. I hear a lot of people say – anecdotal data – that they like their fitness tracker because it motivates them to move. They say they like the “gamification” of exercise – earning points and badges, sharing their efforts via social media, and so on. They insist they need this extrinsic motivation as their intrinsic motivation simply isn’t enough. Not 10,000 steps worth of enough, that is.

And Americans have been tracking calories for quite some time now. Again, there’s a history here – why the calorie is the unit of measurement. Like the invention of the pedometer, there are many origin stories we could tell here – the development of the science of human nutrition in the early twentieth century. I’ll give you one name (because I’ve only mentioned men so far): Lulu Hunt Peters, an American doctor, who published the bestselling diet book Diet & Health: With Key to the Calories in 1918 and who popularized the idea that if you counted calories, you can lose weight.

500 years of pedometers. 100 years of counting calories. 50 years of connecting “steps” and “fitness.” Today’s fitness tracker isn’t new, but rather fits quite neatly into a long social and technological history. We are very accustomed to the stories about measuring these data-points for the sake of our personal health and well-being. There’s a cultural logic to the fitness tracker.

Of course, as the familiar saying (often misattributed to Einstein) goes, “Not everything that counts can be counted, and not everything that can be counted counts.”

Is this meaningful data? Are “steps” or “calories” meaningful units of measurement, for example? What can we truly know based on this data? Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate? What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “health”? How much is science, and how much is marketing? Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent? Are people being compelled to surrender their data? Are people being profiled based on this data? Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals? Who designs the algorithms? What sorts of biases do these algorithms encode? How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others? Who decides what behaviors constitute “good health”?

Those are questions we should consider regarding fitness trackers, sure. But they’re questions for all sorts of technologies – education and otherwise.

Please ask these questions when you hear the marketing for “learning analytics.” I’m going to re-state that previous paragraph:

Is this meaningful data? Are “test scores” or “grades” meaningful units of measurement, for example? What can we truly know based on this data? Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate? What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “learning”? How much is science, and how much is marketing? Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent? Are people being compelled to surrender their data? Are people being profiled based on this data? Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals? Who designs the algorithms? What sorts of biases do these algorithms encode? How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others? Who decides what behaviors constitute “a good student” or “a good teacher” or “a good education”?

Are learning analytics (or your fitness tracker) a way you can “know thyself”?

from Hack Education http://ift.tt/2rIePd2
via IFTTT

Thrill Ride

It is evident that if an experience is extra-fun or especially if it is meaningful I will probably not blog about it… A memorable stretch in the UK for OER17 and more is the most recent example. And there is every chance I will never properly capture what I dug about this past week at the Domains Conference.

I commiserated on this syndrome with Keegan Long-Wheeler, and we made something of a pact to not let this happen here… That we would try to throw out a few bite-sized chunks and not obsess about nailing all the nuances. He’s doing it, so I’m gonna do it at least once too.

There were a lot of moments this week where I felt as if there was no point in time or space that I would rather be. This was one of them, during which I had the uncharacteristic presence of mind to pull out my phone and take a pic:

This was as the conference was winding down, with Dr. Jones spinning tunes, and that is Martha Burtis and Tom Woodward riffing, and Alan Levine riding the wave and chiming in…

To get a sense of why this felt so wild, I would urge you to check out Martha’s magnificent keynote, and Tom’s epic reveal of how he manages Ram Pages and of so much else. Both merit long thoughtful dives in themselves, but [he repeats to himself, “keep it small Brian”] gotta keep this one focused, keep it out of the draft folder mausoleum…

If I recall this moment correctly, Martha was talking about the implications of the WordPress REST API, suggesting we think of WordPress less as a publishing platform and more as something like an operating system for the web as platform. This was while Tom and Martha were trading licks like ace guitarists… When one of them would say to the other, “oh, you’ll like this…” you knew something cool was coming.

I tried in vain to keep up, but the best I could do was capture some URLs and email them to myself. I’ll reproduce those below… I intend to pursue this stuff further, if anyone has similar things they’d like to share or tips on how I can begin to understand them better, I’d be most eager.

I hope my glosses aren’t too inaccurate or embarrassing.

Digital Histology: Tom blogged about it here. What seemed notable was some slick work to make the integrated annotated images flow so smoothly and load so fast. This is a common theme in much of Tom’s current work… using the WP API and JSON to supercharge performance.

Georgetown Domains: this was also demo’ed during a fantastic session Tom co-presented with Marie Selvanadin and Yianna Vovides of Georgetown. The approach for assembling material from across many sites using the API/JSON offers huge speed improvements over existing syndication approaches such as FeedWordPress, and seems amazingly flexible and extensible… I kept having EduGlu flashbacks…
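To give a flavor of what that API/JSON approach looks like, here is my own rough sketch (placeholder site URLs, not the Georgetown implementation): each WordPress site exposes its recent posts as JSON at /wp-json/wp/v2/posts, so an aggregator can fetch and merge them directly rather than polling RSS feeds.

```javascript
// Rough sketch of API/JSON aggregation across WordPress sites
// (placeholder URLs; not the actual Georgetown Domains code).
const sites = [
  "https://site-one.example.com",
  "https://site-two.example.com",
];

// Fetch a site's most recent posts from the WordPress REST API.
async function recentPosts(site, perPage = 5) {
  const res = await fetch(`${site}/wp-json/wp/v2/posts?per_page=${perPage}`);
  const posts = await res.json();
  return posts.map((p) => ({
    site,
    title: p.title.rendered,
    link: p.link,
    date: p.date,
  }));
}

// Merge everything into one newest-first stream.
Promise.all(sites.map((s) => recentPosts(s)))
  .then((lists) => lists.flat().sort((a, b) => b.date.localeCompare(a.date)))
  .then((stream) => console.log(stream));
```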

Martha shared particles.js, a JavaScript library… Play with those variable sliders on the right and enjoy the trip…

Data Visualization for an ecology course… exploring “the value of hypothesis visualizations — hypotheses that are more than words — as a way of considering how to analyze and visualize data.”

MathBox… more JavaScript work that takes it to the next level… Go deep on this one, there is lots and lots here. Not least this amazing presentation.

All this and lots more went by in what felt like a few dizzying moments. It was an ed tech thrill ride, my head swimming with possibilities, wondering how I could learn from and draw on this stuff, share it with our team at TRU, or with the nascent BC Open Ed Tech Co-op. We’ll see what happens.

 

from abject learning http://ift.tt/2s9fl7S
via IFTTT

The Histories of Personalized Learning

I delivered this talk today at the OEB MidSummit conference in Reykjavik, Iceland.

I recognize that the theme of this conference is “shaping the future of learning” but I want to talk a little bit about the past. I want us to think about the ways in which the history of learning – how we tell that story – shapes the future of learning, and how the history of technology (education technology and otherwise) – and how we tell that story – shapes the future of technology. I want us to recognize there is a history even in the face of a fervent insistence that new, digital technologies are poised to sweep away traditional institutions and traditional practices. You know the stories: revolutions and disruptive innovations and other millennialist mythologies: the end of history, the end of work, the end of college, and so on.

You hear a lot of these sorts of proclamations when it comes to “personalized learning,” which is (increasingly) frequently invoked in direct opposition to some imagined or invented version of learning in the present or in the past. Education technologists and futurists (and pundits and politicians) like to provide these thumbnail sketches about what schooling has been like – unchanged for hundreds or thousands of years, some people (who are clearly not education historians) will try to convince you. They do so in order to make a particular point about their vision for what learning should be like. “The factory model of education” – this is the most common one – serves as a rhetorical and political foil against which reforms and technological interventions can be positioned. These sorts of sketches and catchphrases never capture the complex history of educational practices or institutions. (They’re not meant to. They’re slogans, not scholarship.) Nevertheless these imagined histories are often quite central to the premise that education technology is different and disruptive and new and, above all, necessary.

There is no readily agreed upon meaning of the phrase “personalized learning,” which probably helps its proponents wield these popularized tales about the history of education and then in turn laud it – “personalized learning,” whatever that is – as an exciting, new corrective to the ways they claim education has “traditionally” functioned (and in their estimation, of course, has failed).

“Personalized learning” can mean that students “move at their own pace” through lessons and assignments, for example, unlike those classrooms where everyone is expected to move through material together. (In an invented history of education, this has been the instructional arrangement for all of history.) Or “personalized learning” can mean that students have a say in what they learn – students determine topics they study and activities they undertake. “Personalized learning,” according to some definitions, is driven by students’ own interests and inquiry rather than by the demands or standards imposed by the instructor, the school, the state. “Personalized learning,” according to other definitions, is driven by students’ varied abilities or needs; it’s a way of navigating the requirements of school bureaucracies and requesting appropriate accommodations – “individualized education plans” and the like. Or “personalized learning” is the latest and greatest – some new endeavor that will be achieved, not through human attention or agency or through paperwork or policy but through computing technologies. That is, through monitoring and feedback, through automated assessment, and through the programmatic presentation of new or next materials to study.

“Personalized learning,” depending on how you define it, dates back to Rousseau. Or it dates back further still – to Alexander the Great’s tutor, some guy named Aristotle. It dates to the nineteenth century. Or to the twentieth century. It dates to the rise of progressive education theorists and practitioners. To John Dewey. Or to Maria Montessori. Or it dates to the rise of educational psychology. To B. F. Skinner. To Benjamin Bloom. It dates to special education-related legislation passed in the 1970s or to the laws passed in the 1990s. Or it dates to computer scientist Alan Kay’s 1972 essay “A Personal Computer for Children of All Ages.” Or it dates to the Gates Foundation’s funding grants and political advocacy in the early 2000s. Take your pick. (Take your pick. Reveal your politics.)

I want to talk to you today about the history of personalized learning – in no small part because it’s taken on such political and financial and rhetorical significance. Andrew Keen alluded to this yesterday in his remarks about the efforts of Silicon Valley’s philanthro-venture-capitalism in shaping the future of education. Bill Gates and Mark Zuckerberg, for example, are plowing billions of dollars into “personalized learning” products and school reforms. That seems significant – particularly if we don’t understand or agree on what the phrase actually means. (That means, it seems likely, that these billionaires get to decide, not progressive educators.)

So, where did this concept of “personalized learning” originate? Who has propagated it? When? Why? How has the meaning of the phrase changed over time? That’s a lot to do in a 20-minute talk, so I’m going to offer you several histories, origins, and trajectories of “personalization” more broadly – as a cultural, not just technological or pedagogical, practice.

The OED dates the word “personalization” in print to the 1860s, but the definition that’s commonly used today – “The action of making something personal, or focused on or concerned with a certain individual or individuals; emphasis on or attention to individual persons or personal details” – dates to the turn of the twentieth century, to 1903 to be precise. “Individualization,” according to the OED, is much older; its first appearance in print was in 1746.

The Google Ngram Viewer, which is also based on material in print, suggests the frequency of these two terms’ usage – “individualization” and “personalization” – looks something like this:

In the late twentieth century, talk of “individualization” gave way to “personalization.” Why did our language shift? What happened circa 1995? (I wonder.)

Now, no doubt, individualism has been a core tenet of the modern era. It’s deeply enmeshed in Western history (and in American culture and identity in particular). I always find myself apologizing at some point that my talks are so deeply US-centric. But I contend you cannot analyze digital technologies and the business and politics of networks and computers without discussing how deeply embedded they are in what I’ve called the “Silicon Valley narrative” and in what others have labeled the “California ideology” – and that’s an ideology that draws heavily on radical individualism and on libertarianism.

It’s also an ideology – this “Silicon Valley narrative” – that is deeply intertwined with capitalism – contemporary capitalism, late-stage capitalism, global capitalism, venture capitalism, surveillance capitalism, whatever you prefer to call it.

Indeed, we can see “personalization” as both a product (and I mean quite literally a product) of and a response to the rise of post-war consumer capitalism. Monograms on mass-produced objects. Millions of towels and t-shirts and trucks and tchotchkes that are all identical except you can buy one with your name or your initials printed on it. “Personalization” acts as some sort of psychological balm, perhaps, to standardization.

A salve. Not a solution.

But “personalization” is not simply how we cope with our desire for individuality in an age of mass production, of course. It’s increasingly how we’re sold things. It’s how we are profiled, how we are segmented, how we are advertised to.

Here’s Wikipedia’s introduction to its entry on “personalization,” which I offer not because it’s definitive in any way but because it’s such a perfect encapsulation of how Internet culture sees itself, sees its history, tells its story, rationalizes its existence, frames its future:

Personalization, sometimes known as customization, consists of tailoring a service or product to accommodate specific individuals, sometimes tied to groups or segments of individuals. A wide variety of organizations use personalization to improve customer satisfaction, digital sales conversion, marketing results, branding, and improved website metrics, as well as for advertising.

How much of “personalized learning” as imagined and built and sold by tech companies is precisely this: metrics, marketing, conversion rates, customer satisfaction? (They just use different words, of course: “outcomes-based learning,” “learning analytics.”)

Online, “personalization” is how we – we the user and we the consumer as, let’s be clear, those are the frames – are convinced to take certain actions, buy certain products, click on certain buttons, see certain information (that is to say, learn certain things). “Personalization” is facilitated by the pervasive collection of data, which is used to profile and segment us. We enable this both by creating so much data (often unwittingly) and surrendering so much data (often voluntarily) when we use new, digital technologies. “The personal computer” and such.

(You know it’s “personal.” You get to change the background image. It’s “personalized,” just like that Coke bottle.)

The personal computer first emerged as a consumer product in the 1970s – decades after educational technologists and educational psychologists had argued that machines could “personalize” (or at the time, “individualize”) education.

Among these first teaching machines was the one built by Ohio State University psychology professor Sidney Pressey. His device, “the Automatic Teacher,” was constructed out of typewriter parts. He debuted it at the 1924 American Psychological Association meeting. A little window displayed a multiple choice question, and the student could press one of four keys to select the correct answer. The machine could be used to test a student – that is, to calculate how many right answers were chosen overall; or it could be used to “teach” – the next question would not be revealed until the student got the first one right, and a counter would keep track of how many tries it took.

The “Automatic Teacher” wasn’t Pressey’s first commercial move. In 1922 he and his wife published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid-1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

Yes, standardized testing had already become commonplace (in the American classroom at least) by the 1920s, and this practice placed a significant burden upon those teachers and clerks tasked with scoring them. Pressey argued that the automation of testing could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – it should free her for real teaching of the inspirational.” No doubt, these arguments echo those made today about how ed-tech will free the teacher for more individualized attention, instruction, and remediation.

But I think Pressey’s work also serves to underscore this other tension that we find throughout the twentieth century. This isn’t simply about “labor-saving devices” or instructional or administrative efficiency. The “Automatic Teacher” was also a technology of individualization, one that Pressey and others since have insisted was necessitated by the practices and systems of standardization in schools, by the practices and systems of mass education itself.

It’s significant, I think, that early teaching machines were developed by psychologists and justified by psychology – very much a science of the twentieth century. After all, psychology – as a practice, as a system – helped to define and theorize the individual, “the self.” Self-management. Self-reflection. Self-help. Self-control.

Individualization through teaching machines is therefore a therapeutic and an ideological intervention, one that’s supposed to act as a salve in a system of mass education. And this has been the project of education technology throughout the twentieth century.

I recognize that I put “pigeons” in the title of this talk and I haven’t yet made the connection between the history of personalization and the history of pigeon training. It’s there in the history of educational psychology, in the history of behavioral modification, in the history of teaching machines. But I opted to scrap the ending I’d originally written for this talk – one that, I promise, tied it all together. Instead of the pigeons of ed-tech, I feel compelled to end with some thoughts on the politics of ed-tech.

Institutions face an enormous crisis today – one of credibility and trust, one that Chris Hayes identified in 2012 in his book Twilight of the Elites. He argued that

We now operate in a world in which we can assume neither competence nor good faith from the authorities, and the consequences of this simple, devastating realization is the defining feature of American life at the end of this low, dishonest decade. Elite failure and the distrust it has spawned is the most powerful and least understood aspect of current politics and society. It structures and constrains the very process by which we gather facts, form opinions, and execute self-governance. It connects the Iraq War and the financial crisis, the Tea Party and MoveOn, the despair of laid-off autoworkers in Detroit to the foreclosed homeowners in Las Vegas and the residents of the Lower Ninth Ward in New Orleans: nothing seems to work. All the smart people fucked up, and no one seems willing to take responsibility.

We can add to Hayes’s list, of course, more recent events: Brexit and Donald Trump and the latter’s withdrawal last week from the Paris Climate Accord. They can’t even get the weather report right, the President of the United States of America reportedly quipped to friends over golf; why should we trust climate scientists? This “death of expertise” has profound implications, no doubt, for the future of education, scholarship, teaching and learning, democracy. And, as Andrew Keen observed yesterday, we must consider the ways in which “populism” and “personalization” as cultural and political and economic forces might actually be intertwined – how Facebook’s algorithmically driven News Feed, most obviously, has only served to make things worse.

A journalist recently asked the US Secretary of Education about different rates of discipline for students of color and students with disabilities, and if this was a problem her office intended to address. Addressing the racial disparities in school discipline – and addressing this as a civil rights issue – had been a major focus of the Obama Administration’s final few months. Betsy DeVos responded, “I think that every student, every individual is unique and special and we need to be really intent on focusing on the needs of each individual student.”

For DeVos – and for many, many others – “personalized learning” means just this: “we need to be really intent on focusing on the needs of each individual student.” The needs of the individual to the benefit of the individual. But to DeVos – and to many, many others – exalting the freedom of the individual here also means freedom from government control (from government control over the education system). It’s not freedom from corporations, oh no; it’s freedom from the state and more explicitly freedom from the regulations that have been put in place in the last sixty years to try to force educational institutions to be more equitable. We heard Donald Clark argue yesterday that schools need to become unsafe spaces again, but let’s recognize that schools have never been “safe spaces” for most of the people on this planet.

When Betsy DeVos and others say that “we need to be really intent on focusing on the needs of each individual student,” what she doesn’t add is that all risk, in this worldview, would fall on the individual as well, of course. In a world with no institutions – unbundled and disintermediated as Silicon Valley is clearly keen to do – there are no institutional protections. With no government oversight, there is no appeal to civil rights.

So this is our challenge in the face of those calling for “personalized learning” – the Betsy DeVoses and the Mark Zuckerbergs. And it’s our challenge, not only in education technology, but in democracies more generally: can we maintain a shared responsibility for one another when institutions are dismantled and disrupted? Will we have any semblance of collective justice in a “personalized,” algorithmically-driven world?

from Hack Education http://ift.tt/2t1Vx2S
via IFTTT