I’ll Take That Text Extra Large: Bigify It

Some people like to spend their long airplane time watching movies or playing phone games. That’s fine. Me, I usually sleep, but on my cross-country flight yesterday I wanted to try building a little HTML/jQuery tool.

Because it’s something I can use this week.

Because I can (running on my laptop’s localhost).

Well, almost. I lacked internet access to look up the parts of the code I got wrong. But I got about 80% there.

Presenting… Bigify. The thing it does?

There is a small form field at the bottom; whatever is typed or pasted there is “bigified” on the main display.

Stunning, eh?

I have a few days this week of hands-on web workshops; I won’t be doing slides, mostly web demos. There is this part where I want to have everyone go to the same site; I will make short URLs, but still, it seems useful to display them as Big On Screen as Possible.

There are a number of jQuery code bits that dynamically resize text to the window size; I decided to try FitText, which is what I downloaded before my trip. I also made sure I had a local copy of jQuery (the one served by Google), just so I could play while not connected.
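For the curious, FitText’s sizing rule is small enough to sketch as a plain function. This is a hypothetical distillation, not the plugin’s actual code, and the 16px minimum here is an illustrative default:

```javascript
// Sketch of FitText-style sizing: the font size tracks the container width,
// scaled down by a "compressor" and clamped to optional min/max bounds.
// (The 16px minimum is an assumption for this sketch, not FitText's default.)
function fitTextSize(containerWidth, compressor = 1, minFontSize = 16, maxFontSize = Infinity) {
  return Math.max(
    Math.min(containerWidth / (compressor * 10), maxFontSize),
    minFontSize
  );
}

console.log(fitTextSize(1280)); // a 1280px-wide display gets a 128px font
```

In the plugin itself you would just call something like `$('#display').fitText()` and let it re-run the calculation whenever the window resizes.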

There’s a lot of room for improvement. I wanted the form field to disappear when not being edited, but the jQuery show/hide worked to hide it, never to show it again (same with CSS hover tricks; maybe it’s the absolute positioning??). Maybe it could use query parameters to save pre-configured strings as URLs. Maybe some options for different display colors (light text on a black background for contrast??).
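On the query-parameter idea: the string could ride along in the URL and get read back on page load. A minimal sketch, where the `text` parameter name and the `textFromQuery` helper are made up for illustration:

```javascript
// Hypothetical helper: recover a pre-configured display string from a URL's
// query portion, so a link like bigify.html?text=Hello%20World would open
// with "Hello World" already bigified.
function textFromQuery(search, fallback) {
  const params = new URLSearchParams(search);
  return params.get('text') || fallback;
}

// On page load, something like:
//   $('#display').text(textFromQuery(window.location.search, 'Bigify'));
```

That keeps the sharing story simple: set up a string, copy the URL, and anyone following it sees the same big text.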

Got ideas? Fork this thing!

Who knows? I have a flight home Wednesday, maybe I will do an update.


Thanks to John Johnston for forking this and putting into play a better interface: a form field element that replaces the display text, so it feels like editing it right in place.

Some more tweaks are coming, but this now uses a better font on mobile devices (I hope).

Featured Image: Large Order Of Toast flickr photo by JD Hancock shared under a Creative Commons (BY) license

from CogDogBlog http://ift.tt/2w4GutU


Visualizing future trends for education and technology

With the help of Future Trends in Technology and Education friends and Patreon supporters, we now have a first FTTE infographic.

The idea was to organize all of the 85+ trends the report tracks into a single image. This first design is aimed at appearing as one page, such as for a workshop handout.

FTTE visualization

The heart of it is the group of three main columns, which contain the bulk of FTTE content.  The very top contains the higher ed crisis or bubble trends; they appear up there because they rest on other trends, like pillars.  I showed the connection between specific technologies as they appear in the world and their educational instances (3d printing, digital video, etc.) by aligning them within a colored box.

Each trend contains countervailing trends as well.

Later I’d like to edit and compress it down to smaller sizes, as for a card.  That would most likely involve combining trends into rubrics or mega-trends, like piling VR, AR, and MR together.  I can also turn this into an interactive object, with links from each trend.

What do you think?

from Bryan Alexander http://ift.tt/2ubmINB

Information Underload

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

Take, for instance, the latest news on Watson. Watson, you might remember, is IBM’s AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.

So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:

“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.”

This is not just the case with cancer, of course. You’ve heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren’t particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.

In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.

You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)

In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.

Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.

We see this pattern again and again — companies coming along to tell us that their platform will help us with the firehose of content. But the big problem is not that it’s a firehose, but that it’s a firehose of sewage. It’s all haystack and no needle. And the reason this happens again and again is that what we so derisively call “content” nowadays is expensive to produce, and gets produced by a large number of well-paid people who in general have no significant marketing arm. To scale up that work is to employ a lot of people, but it doesn’t change your return on investment ratio. To make a dollar, you need to spend ninety cents, and that doesn’t change no matter how big you get. And who wants to spend ninety cents to make a dollar in today’s world?

Processing and promotion platforms, however, like Watson or MOOCs or Facebook, offer the dream of scalability, where there is zero marginal cost to expansion. They also offer the potential of monopoly and lock-in, to drive out competitors. And importantly, that dream drives funding which drives marketing which drives hype.

And this is why there is endless talk about the latest needle in a haystack finder, when what we are facing is a collapse of the market that funds the creation of needles. Netflix caught on. Let’s hope that the people who are funding cancer research and teaching students get a clue soon as well. More money to the producers of valuable content. Less to platforms, distributors, and needle-finders. Do that, and the future will sort itself out.

I’m guessing if you are reading this you already know this, but if you are interested in this stuff, make sure to read Audrey Watters’ This Week In Robots religiously, as well as her writing in this area, which has been very influential on me.



from Hapgood http://ift.tt/2vjxCOk

Moodle and the next LMS: reflections and more questions

Last week Martin Dougiamas, the creator of Moodle, the world’s leading open source learning management system, joined our Future Trends Forum to discuss the future of that technology.

Someone on the Moodle.com site wrote up an extraordinarily rich report on the session, complete with numerous screen captures.  Bravo!

The full video recording is now available on YouTube:

During the hour Forum participants offered many questions.  Martin tackled a slew of them, yet still more came in.  I’d like to reproduce them here, edited very lightly.  They indicate the richness of today’s unfolding LMS discussion, and help illuminate where things might be headed.

Text question from Ed Finn: OER Game Changer – Moodle Community could separate it from Canvas and Blackboard who keep these types of sharing resources behind a wall.  I see it as similar to the Canvas Commons for resources with a social media component?

Text question from Sonja Strahl: Community question – Will the community you were discussing be available for only those with Moodle as their LMS (for both creation of OER and use of OER)? Or will it be open to everyone, and under Creative Commons license?  

Text question from Richard Wack: Accessibility – Blackboard recently acquired Ally which appears to be a very impressive tool to address accessibility as it pertains to courses. What is the present and future direction by Moodle on this important topic? Thank you. 

Text question from Josh: Anti-LMS – How do you respond to the anti-LMS pedagogy voices, even those who might object to an open-source tool like Moodle?  Does that debate interest you at all?

Text question from Ed Finn: Communication – Just out of curiosity, does Moodle offer social media, text and other communication coordination?  I know that Canvas has a rich development here where you can choose to communicate by email, tweet, text, or app.

Text question from Ed Finn: Versioning – What are your thoughts on different versions of Moodle and sharing between them?  How do you see the community working with multiple versions? 

What are you wondering about Moodle and the LMS, looking ahead?

My thanks to Martin and the Forum community for their generous time and thought.


from Bryan Alexander http://ift.tt/2ubV1RE

I defy the world and go back to RSS

It may be perverse, but in this age of Facebook (now 2 billion strong) I’ve decided to rededicate myself to RSS reading.  That’s right: old school, Web 2.0 style.


A big reason is that Facebook’s front page is so, so massively unreliable.  Despite having huge numbers of people who are my friends, clients, and contacts, it’s just not a good reading and writing service.  Facebook’s black box algorithm(s) may or may not present a given user’s post, for reasons generally inscrutable.  I’ve missed friends’ news about new jobs, divorces, and deaths because the Zuckerbergmachine deems them unworthy of inclusion in my personalized river of news.  In turn, I have little sense of who will see my posts, so it’s hard to get responses and very hard to pitch my writing for an intended audience.  Together, this makes the FB experience sketchy at best.  To improve our use of it we have to turn to experiments and research that remind me of Cold War Kremlinology.

Twitter helps a bit, but not too much.  They’re running their own feed management software to some unknown degree.  Moreover, while I can use Twitter to find some good content hosted elsewhere (blog posts, articles, podcasts, videos, etc.) I still keep missing items un-tweeted, or at least un-tweeted by people I follow.  So Twitter is a flawed filter.

What else can I use to conduct research into the swiftly developing worlds of technology and education?  Some individual platforms let me follow content there or via email alerts (for example: WordPress, Medium, Tumblr), but that fragments the web and becomes unmanageable as the number of platforms grows.

People claim that RSS readers are history.  It’s popular to proclaim that blogs are dead.  I defy them all.

So I’m back to the sweet, open goodness of RSS reading*.  For the rest of this post I’ll describe my current setup.

In 2013 Google Reader died, and I and millions of others went on a quest for a successor.  For my primary research needs I settled on the Digg Reader, and haven’t changed since.  It’s free, reliable, cleanly designed, easy to use.  I run it on several laptops.  On my phone I make do with Feedly, which is pretty but not serious.

Here’s what my feed setup looks like now.  The list of feed categories, organized into folders, occupies the left (grey-ish) column.  Output from one of those folders, Futures and Futurists, runs down the right two-thirds of the screen:

Digg Reader sample screenshot

Let me break this down.

One strength of RSS is the way it lets users arrange feeds into whatever sequence makes sense to them.  I like clumping feeds into categories, then arranging those folders into an order that works for my day.

My RSS feeds, part 1

Starting off that order are feeds directly based on my work (see screenshot to left).  There’s a folder with output from my various blogs, so I can see what impression I’m leaving, along with keyword searches for myself and my work.

Then there are folders for clients, broken down into different groups.  This way I can follow the progress of schools, organizations, governments, libraries, museums, and individuals I’ve helped and/or am currently working with.  As you can see from their placement in my workflow, they are a leading priority.   Some are represented here by organizational feeds, such as the Ithaka S&R blog.  Others appear through individual faculty members, librarians, or technologists.

Following that are feeds from Future Trends Forum guests.  That growing community is vital to my work, and I learn a great deal from these fine people.  Right after them come a set of futurists and other folks writing about the future (see compressed image up above): again, central to my work.

My RSS feeds - 2

Following that first group of folders (each containing a group of RSS feeds) comes another swarm.  This one is my main politics, economics, and environmental scan.  My readers know these huge trends play a major role in shaping both education and technology.

This begins with a survey of world news, from sources with a minimum of bias.  The Memeorandum trawl is a major force within this folder – and since that’s an aggregator, its results save me some time.  There are also several feeds for local (Vermont) politics, like the excellent web-based Vt Digger.

Then follow feed groups for economics, for environmental news, and for a loose category upon which I’ve slapped the label “information warfare” (some of which is actually about info ops, but also includes linked observations on culture and geopolitics).  Along with those folders are two dedicated to bias from the left and right.  Bloggers there instruct me on what the respective ideologies (and their branches: libertarian, feminist, socialist, etc.) are thinking, and also point me to news articles I might have otherwise missed.

I learn best when starting with a big picture, then drilling down into small units and more finely grained details, so this top-level section fits that mental stance.

my RSS feeds - 3

A third folder group follows, structured upon other dimensions of my research agenda.  Several trends and megatrends from FTTE get their folders here.  We begin with a daily reads list, which includes major publications (ex: Inside Higher Ed), several crucial bloggers (ex: Stephen Downes’ OLDaily), and several friends whose words mean a great deal to me both personally and professionally (ex: Alan Levine, Brian Lamb).

Next we get folders on higher education, libraries, technology, search, Google (because so important *and* so sprawling), and gaming (a rich and special interest).  Then two folders (because of so many blogs) on ed tech; one on MOOCs; one on gaming in education.

Following this third big section is a fourth one for fun and culture.  That has folders on Gothic literature, comedy, science fiction, books, friends with whom I do not have a professional connection, food, and music.  I’ll leave off a graphic for now, because they lead away from my research focus.

So that’s around 40 folders, and maybe 400 feeds.  Naturally I’ve curated these over time, and continue to add and subtract as we progress.

Does this giant pile and apparatus save me time?  Yes.  Instead of leaping from platform to platform, I just inhabit the Digg.  I don’t have to worry if Facebook has hidden someone’s latest, or if a story escaped people I follow on Twitter.

Yes, this is a lot of reading… but I’m a researcher and writer, and need this range of inputs.  We can’t do futures work without diversity and variety of sources.  Moreover, some repetition occurs across multiple feeds, which is itself useful.  I can look for different perspectives on the same story, while noting rising interest in a development as something potentially noteworthy as well.

There’s a politics here.  RSS reading is based on the open web, and I continue to fight for that, even in an age of rising silos and walled gardens.  Less clear is a theme of conversation through connections, which is increasingly vital to me.  I love being able to arrange feeds across filter bubbles, and to see ideas move across boundaries.

I still use Twitter for professional reasons.  For whatever reason I can’t get professional discussions rolling on Facebook, but do manage to stir up good conversations on politics (!!), culture, and animals.

Is anyone else still using RSS?  Am I bonkers to do so?  Should I do a post like this about my Twitter setup?

(If I have time I’ll write about the foolishness of proclaiming blogs to be dead.)

*I wonder if I need to define RSS in 2017.  How many people will confuse the technological standard with this group?

from Bryan Alexander http://ift.tt/2u04mP9

The History of the Pedometer (and the Problems with Learning Analytics)


These were my remarks as a guest speaker in Donna Murdoch’s class “Online Teaching and Learning – Applying Adult Learning Principles” this evening. I was asked to speak about learning analytics, but like I said in my keynote last week at NMC, ed-tech is boring. So this is a talk about pedometers.

“Know thyself” – this is an ancient maxim, of course. But it’s become not so much a philosophy of introspection or reflection but a compulsion for data collection and data analysis. We now live in a culture of quantification. (We have for a while now, no doubt.) All this is aided today, no doubt, by new computing technologies that create and collect massive amounts of personal data.

Learning analytics, in some ways, is a symptom of this data-driven culture – one that also is not new to education. Learning analytics are technologies that support and reflect the idea that we can collect and measure and analyze data about learners in order to know what they know, in order to optimize what and how they learn.

I want to invoke the guest speaker’s privilege and talk about something slightly different from what I was asked to speak about, which was learning analytics. Now, I hope you’ll see that almost everything I say is very much related to learning analytics and to education technologies more broadly – to how we’re asked to hand over our personal data to various hardware and software companies, to our employers, to the government, to our schools under the guise of better “outcomes,” more productivity, and so on.

I want to talk a little bit about fitness trackers this evening.

“Wearables,” for what it’s worth, were featured in the 2016 Horizon Report for K–12, an annual report that predicts which education technologies are “on the horizon.” The “Quantified Self” appeared on the 2014 Horizon Report for Higher Education. In both cases, the Horizon Report predicted these technologies were four to five years from widespread adoption.

You hear these sorts of predictions all the time – that everyone is going to own or use X, Y, or Z technology in the next few years – but according to a recent study, only about 10% of Fitbit owners (and that’s of the less than 12% of US consumers who own fitness trackers) are still wearing the device after a year.

Beware the marketing hype.

Like all technologies, fitness trackers have a history – one that certainly predates Fitbit or Jawbone or the Nike Fuelband.

There’s some debate about who invented the first pedometer, which remains a core functionality of most activity trackers: that is, counting how many steps one takes per day. Wikipedia lists three possible inventors: Leonardo da Vinci, who sketched the design for a gear-driven device with a pendulum arm that would swing back and forth with every walking leg motion and measure distance traveled; Abraham-Louis Perrelet, a Swiss inventor who built a self-winding watch in 1770 that wound when the wearer walked and then built another device, based on that watch, in 1777 that could measure walking distance; and Thomas Jefferson (Americans do like stories in which we feature prominently in the invention of things, don’t we), who purportedly brought the first pedometer to the US, although it’s not known if he ever improved on the design as he never filed any patents for his inventions. A website that reviews fitness devices also suggests that Jean Fernel, a French craftsman, might have invented the first pedometer in 1525 or Robert Hooke, an English scientist, might have in 1674, or Hubert Sarton, another Frenchman, might’ve in 1778. It was John Harwood, a British man, who was awarded the first patent for a pedometer in 1924. So even if we date pedometers from that patent, we’re still looking at about 100 years of history; if we credit da Vinci, we’re looking at about 500 years of pedometers.

500 years, and still less than 12% of Americans own a fitness tracker. Be a little skeptical of those who insist that technologies are changing faster than ever or that we’re adopting new technologies more quickly than ever before.

Now, it’s worth asking why so many inventors have been interested in the pedometer concept. For these men I’ve just named, at least, their interest was not in improving “fitness” per se but in measuring distance. For da Vinci, the device had military applications; he also imagined it would help improve mapping.

The promotion of the pedometer as a fitness device started in the 1960s when Dr. Yoshiro Hatano, a professor at the Kyushu University of Health and Welfare, undertook some applied research into exercise and calories. Concerned about the rise in obesity in Japan and wanting to promote and reinforce daily activity as part of “good health,” Hatano began selling a device known as “Manpo-kei” – the 10,000 steps meter. Hatano had calculated that the average Japanese person walked about 3500 to 5000 steps a day. By increasing the number of steps to 10,000 (roughly 5 miles), the amount of calories burned obviously would increase as well – up to about 500 calories a day, which could translate into about 20 kilos of weight loss in a year, he claimed. 10,000 steps was, according to the marketing for the Manpo-kei, ideal.
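Hatano’s weight-loss arithmetic is checkable, if you grant the common rule of thumb (an outside assumption, not part of the talk) that a kilogram of body fat corresponds to roughly 7,700 calories:

```javascript
// Back-of-the-envelope check of the Manpo-kei claim.
// The 7,700 kcal-per-kilogram figure is a widely used rule of thumb,
// supplied here as an assumption; it is not Hatano's own number.
const extraCaloriesPerDay = 500;
const daysPerYear = 365;
const caloriesPerKg = 7700;

const kgPerYear = (extraCaloriesPerDay * daysPerYear) / caloriesPerKg;
console.log(kgPerYear.toFixed(1)); // 23.7
```

That lands in the same ballpark as the claimed “about 20 kilos,” though it says nothing about whether the 500-calorie figure itself holds for any given walker.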

There are plenty of reasons to question that claim. 10,000 steps is less some medically-advised threshold than it is a marketing gimmick. Hatano could have picked 7500 steps or 13,333. 10,000 steps is a nice round number, one that will take you about 100 minutes of moderate activity to accomplish – but it’s also an arbitrary number. 10,000 steps is a goal that’s based on a lot of assumptions about bodies and activity and physical ability too. Nevertheless the number – and the connection between “steps” and “fitness” – has stuck with us for 50 some-odd years now. 10,000 – that’s the goal that almost all fitness trackers set for us.

And so, we can debate whether or not measuring “steps” is the same as measuring “fitness.” But we should ask too: How well do these devices actually track “steps”? (Rather, how accurate are they in counting “steps” and converting all our physical activity into “steps”?)

Surprise, surprise. They’re far from perfect. It depends on where you wear the device – on your wrist, in your bra, in your pocket, in your purse. It depends on what kind of activity you undertake. A study published in 2013 found that these devices tended to underestimate the energy expended while standing or bicycling or jogging uphill. And it depends on the device, the brand. A recent study from Stanford found that six out of seven wristband activity monitors measured heart rate with an error rate of less than 5%. Not too bad. But none of these monitors measured energy expended – a.k.a. calories – accurately. The most accurate fitness device was off by an average of 27%. Off, in other words, by roughly one McDonald’s Cheeseburger.

These errors are pretty important if you’re making decisions about your diet based on the data you glean from your fitness tracker – like should you have a McDonald’s Cheeseburger or another glass of wine. These errors are really important if someone else is making decisions about you based on this data – like your employer deciding whether your participation in the company wellness program is adequate. Or your health insurance company deciding whether to deny you coverage based on your physical activity or lack thereof. Or your school tracking how much you exercise and what you eat and how much (and where) you sleep and giving you a grade for it.

Oral Roberts University, for example, beginning in the spring of 2016, required its incoming students to wear a Fitbit and encouraged them to log their personal data in the learning management system.

Also in 2016, the University of Michigan signed a $170 million deal with Nike. One provision of the contract allows Nike “to harvest personal data from Michigan athletes through the use of wearable technology like heart-rate monitors, GPS trackers and other devices that log myriad biological activities.”

Are these examples of “learner data”? They’re certainly examples of “student data,” right?

Whose data does the data collected by a fitness tracker belong to? What do the Terms of Service say? (You’ve read the Terms of Service, right?) What else, in addition to how many steps a wearer has taken in a day, do these devices track? What does the fitness tracker maker use this data for? Who does the fitness tracker maker share the data with? Who does the fitness tracker maker sell the data to? How long does the company retain it? Can a user request a copy of their data? Can the user delete it? These aren’t medically-approved devices, of course, but what is being collected is, no doubt, sensitive health data. Is that data safe, secure, private? Are there any legal protections regarding this data – that is, does it count as part of someone’s “medical record”?

What are the implications when we compel people – through health insurance or through employment or through the learning management system – to be monitored in this way?

The marketing tells us that this sort of tracking should be done for our own good, for our health and well-being. We should want to track and be tracked. The science? Well, the science, not so much. Indeed, one study published last year in the Journal of the American Medical Association found that those who wore fitness trackers lost less weight than those who did not.

Yes, that’s just one study. I hear a lot of people say – anecdotal data – that they like their fitness tracker because it motivates them to move. They say they like the “gamification” of exercise – earning points and badges, sharing their efforts via social media, and so on. They insist they need this extrinsic motivation as their intrinsic motivation simply isn’t enough. Not 10,000 steps worth of enough, that is.

And Americans have been tracking calories for quite some time now. Again, there’s a history here – why the calorie is the unit of measurement. Like the invention of the pedometer, there are many origin stories we could tell here – the development of the science of human nutrition in the early twentieth century. I’ll give you one name (because I’ve only mentioned men so far): Lulu Hunt Peters, an American doctor, who published the bestselling diet book Diet & Health: With Key to the Calories in 1918 and who popularized the idea that if you counted calories, you could lose weight.

500 years of pedometers. 100 years of counting calories. 50 years of connecting “steps” and “fitness.” Today’s fitness tracker isn’t new, but rather fits quite neatly into a long social and technological history. We are very accustomed to the stories about measuring these data-points for the sake of our personal health and well-being. There’s a cultural logic to the fitness tracker.

Of course, as the familiar saying (often misattributed to Einstein) goes, “Not everything that counts can be counted, and not everything that can be counted counts.”

Is this meaningful data? Are “steps” or “calories” meaningful units of measurement, for example? What can we truly know based on this data? Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate? What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “health”? How much is science, and how much is marketing? Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent? Are people being compelled to surrender their data? Are people being profiled based on this data? Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals? Who designs the algorithms? What sorts of biases do these algorithms encode? How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others? Who decides what behaviors constitute “good health”?

Those are questions we should consider regarding fitness trackers, sure. But they’re questions for all sorts of technologies – education and otherwise.

Please ask these questions when you hear the marketing for “learning analytics.” I’m going to re-state that previous paragraph:

Is this meaningful data? Are “test scores” or “grades” meaningful units of measurement, for example? What can we truly know based on this data? Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate? What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “learning”? How much is science, and how much is marketing? Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent? Are people being compelled to surrender their data? Are people being profiled based on this data? Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals? Who designs the algorithms? What sorts of biases do these algorithms encode? How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others? Who decides what behaviors constitute “a good student” or “a good teacher” or “a good education”?

Are learning analytics (or your fitness tracker) a way you can “know thyself”?

from Hack Education http://ift.tt/2rIePd2

Thrill Ride

It is evident that if an experience is extra-fun or especially if it is meaningful I will probably not blog about it… A memorable stretch in the UK for OER17 and more is the most recent example. And there is every chance I will never properly capture what I dug about this past week at the Domains Conference.

I commiserated on this syndrome with Keegan Long-Wheeler, and we made something of a pact to not let this happen here… That we would try to throw out a few bite-sized chunks and not obsess about nailing all the nuances. He’s doing it, so I’m gonna do it at least once too.

There were a lot of moments this week where I felt as if there was no point in time or space that I would rather be. This was one of them, during which I had the uncharacteristic presence of mind to pull out my phone and take a pic:

This was as the conference was winding down, with Dr. Jones spinning tunes, and that is Martha Burtis and Tom Woodward riffing, and Alan Levine riding the wave and chiming in…

To get a sense of why this felt so wild, I would urge you to check out Martha’s magnificent keynote, and Tom’s epic reveal of how he manages RamPages and of so much else. Both merit long thoughtful dives in themselves, but [he repeats to himself, “keep it small Brian”] gotta keep this one focused, keep it out of the draft folder mausoleum…

If I recall this moment correctly, Martha was talking about the implications of the WordPress REST API, suggesting we think of WordPress less as a publishing platform and more as something like an operating system for the web as platform. This was while Tom and Martha were trading licks like ace guitarists… When one of them would say to the other, “oh, you’ll like this…” you knew something cool was coming.

I tried in vain to keep up, but the best I could do was capture some URLs and email them to myself. I’ll reproduce those below… I intend to pursue this stuff further, if anyone has similar things they’d like to share or tips on how I can begin to understand them better, I’d be most eager.

I hope my glosses aren’t too inaccurate or embarrassing.

Digital Histology: Tom blogged about it here. What seemed notable was some slick work to make the integrated annotated images flow so smoothly and load so fast. This is a common theme in much of Tom’s current work… using the WP API and JSON to supercharge performance.

Georgetown Domains: this was also demoed during a fantastic session Tom co-presented with Marie Selvanadin and Yianna Vovides of Georgetown. The approach for assembling material from across many sites using the API/JSON offers huge speed improvements over existing syndication approaches such as FeedWordPress, and seems amazingly flexible and extensible… I kept having EduGlu flashbacks…
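For anyone curious what API-based syndication might look like under the hood, here is a rough sketch – my own guess at the general approach, not Tom’s actual code. It pulls recent posts from several WordPress sites via the core REST API’s standard `/wp-json/wp/v2/posts` route and merges them into one newest-first stream, with no feed-polling plugin involved. The site URLs are placeholders.

```python
# Sketch of API-based syndication: pull recent posts from several WordPress
# sites via the core REST API and merge them into one stream.
# The endpoint path (/wp-json/wp/v2/posts) is WordPress's standard REST route;
# any site URLs used with it here would be placeholders.
import json
from urllib.request import urlopen

def posts_endpoint(site_url, per_page=10):
    """Build the standard WP REST API URL for a site's recent posts."""
    return f"{site_url.rstrip('/')}/wp-json/wp/v2/posts?per_page={per_page}"

def fetch_posts(site_url):
    """Fetch and decode one site's posts (a JSON array of post objects)."""
    with urlopen(posts_endpoint(site_url)) as resp:
        return json.load(resp)

def merge_streams(*post_lists):
    """Interleave posts from many sites, newest first, by their ISO dates."""
    merged = [post for posts in post_lists for post in posts]
    return sorted(merged, key=lambda post: post["date"], reverse=True)
```

The speed difference comes from the hub querying structured JSON on demand rather than periodically re-parsing every site’s RSS feed.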

Martha shared particles.js, a JavaScript library… Play with those variable sliders on the right and enjoy the trip…

Data Visualization for an ecology course… exploring “the value of hypothesis visualizations — hypotheses that are more than words — as a way of considering how to analyze and visualize data.”

MathBox… more JavaScript work that takes it to the next level… Go deep on this one, there is lots and lots here. Not least this amazing presentation.

All this and lots more went by in what felt like a few dizzying moments. It was an ed tech thrill ride, my head swimming with possibilities, wondering how I could learn from and draw on this stuff, share it with our team at TRU, or the nascent BC Open Ed Tech Co-op. We’ll see what happens.


from abject learning http://ift.tt/2s9fl7S

The Histories of Personalized Learning

I delivered this talk today at the OEB MidSummit conference in Reykjavik, Iceland

I recognize that the theme of this conference is “shaping the future of learning” but I want to talk a little bit about the past. I want us to think about the ways in which the history of learning – how we tell that story – shapes the future of learning, and how the history of technology (education technology and otherwise) – and how we tell that story – shapes the future of technology. I want us to recognize there is a history even in the face of a fervent insistence that new, digital technologies are poised to sweep away traditional institutions and traditional practices. You know the stories: revolutions and disruptive innovations and other millennialist mythologies: the end of history, the end of work, the end of college, and so on.

You hear a lot of these sorts of proclamations when it comes to “personalized learning,” which is (increasingly) frequently invoked in direct opposition to some imagined or invented version of learning in the present or in the past. Education technologists and futurists (and pundits and politicians) like to provide these thumbnail sketches about what schooling has been like – unchanged for hundreds or thousands of years, some people (who are clearly not education historians) will try to convince you. They do so in order to make a particular point about their vision for what learning should be like. “The factory model of education” – this is the most common one – serves as a rhetorical and political foil against which reforms and technological interventions can be positioned. These sorts of sketches and catchphrases never capture the complex history of educational practices or institutions. (They’re not meant to. They’re slogans, not scholarship.) Nevertheless these imagined histories are often quite central to the premise that education technology is different and disruptive and new and, above all, necessary.

There is no readily agreed upon meaning of the phrase “personalized learning,” which probably helps its proponents wield these popularized tales about the history of education and then in turn laud it – “personalized learning,” whatever that is – as an exciting, new corrective to the ways they claim education has “traditionally” functioned (and in their estimation, of course, has failed).

“Personalized learning” can mean that students “move at their own pace” through lessons and assignments, for example, unlike those classrooms where everyone is expected to move through material together. (In an invented history of education, this has been the instructional arrangement for all of history.) Or “personalized learning” can mean that students have a say in what they learn – students determine topics they study and activities they undertake. “Personalized learning,” according to some definitions, is driven by students’ own interests and inquiry rather than by the demands or standards imposed by the instructor, the school, the state. “Personalized learning,” according to other definitions, is driven by students’ varied abilities or needs; it’s a way of navigating the requirements of school bureaucracies and requesting appropriate accommodations – “individualized education plans” and the like. Or “personalized learning” is the latest and greatest – some new endeavor that will be achieved, not through human attention or agency or through paperwork or policy but through computing technologies. That is, through monitoring and feedback, through automated assessment, and through the programmatic presentation of new or next materials to study.

“Personalized learning,” depending on how you define it, dates back to Rousseau. Or it dates back further still – to Alexander the Great’s tutor, some guy named Aristotle. It dates to the nineteenth century. Or to the twentieth century. It dates to the rise of progressive education theorists and practitioners. To John Dewey. Or to Maria Montessori. Or it dates to the rise of educational psychology. To B. F. Skinner. To Benjamin Bloom. It dates to special education-related legislation passed in the 1970s or to the laws passed in the 1990s. Or it dates to computer scientist Alan Kay’s 1972 essay “A Personal Computer for Children of All Ages.” Or it dates to the Gates Foundation’s funding grants and political advocacy in the early 2000s. Take your pick. (Take your pick. Reveal your politics.)

I want to talk to you today about the history of personalized learning – in no small part because it’s taken on such political and financial and rhetorical significance. Andrew Keen alluded to this yesterday in his remarks about the efforts of Silicon Valley’s philanthro-venture-capitalism in shaping the future of education. Bill Gates and Mark Zuckerberg, for example, are plowing billions of dollars into “personalized learning” products and school reforms. That seems significant – particularly if we don’t understand or agree on what the phrase actually means. (That means, it seems likely, that these billionaires get to decide, not progressive educators.)

So, where did this concept of “personalized learning” originate? Who has propagated it? When? Why? How has the meaning of the phrase changed over time? That’s a lot to do in a 20 minute talk, so I’m going to offer you several histories, origins, and trajectories of “personalization” more broadly – as a cultural not just technological or pedagogical practice.

The OED dates the word “personalization” in print to the 1860s, but the definition that’s commonly used today – “The action of making something personal, or focused on or concerned with a certain individual or individuals; emphasis on or attention to individual persons or personal details” – dates to the turn of the twentieth century, to 1903 to be precise. “Individualization,” according to the OED, is much older; its first appearance in print was in 1746.

The Google Ngram Viewer, which is also based on material in print, suggests the frequency of these two terms’ usage – “individualization” and “personalization” – looks something like this:

In the late twentieth century, talk of “individualization” gave way to “personalization.” Why did our language shift? What happened circa 1995? (I wonder.)

Now, no doubt, individualism has been a core tenet of the modern era. It’s deeply enmeshed in Western history (and in American culture and identity in particular). I always find myself apologizing at some point that my talks are so deeply US-centric. But I contend you cannot analyze digital technologies and the business and politics of networks and computers without discussing how deeply embedded they are in what I’ve called the “Silicon Valley narrative” and in what others have labeled the “California ideology” – and that’s an ideology that draws heavily on radical individualism and on libertarianism.

It’s also an ideology – this “Silicon Valley narrative” – that is deeply intertwined with capitalism – contemporary capitalism, late-stage capitalism, global capitalism, venture capitalism, surveillance capitalism, whatever you prefer to call it.

Indeed, we can see “personalization” as both a product (and I mean quite literally a product) of and a response to the rise of post-war consumer capitalism. Monograms on mass-produced objects. Millions of towels and t-shirts and trucks and tchotchkes that are all identical except you can buy one with your name or your initials printed on it. “Personalization” acts as some sort of psychological balm, perhaps, to standardization.

A salve. Not a solution.

But “personalization” is not simply how we cope with our desire for individuality in an age of mass production, of course. It’s increasingly how we’re sold things. It’s how we are profiled, how we are segmented, how we are advertised to.

Here’s Wikipedia’s introduction to its entry on “personalization,” which I offer not because it’s definitive in any way but because it’s such a perfect encapsulation of how Internet culture sees itself, sees its history, tells its story, rationalizes its existence, frames its future:

Personalization, sometimes known as customization, consists of tailoring a service or product to accommodate specific individuals, sometimes tied to groups or segments of individuals. A wide variety of organizations use personalization to improve customer satisfaction, digital sales conversion, marketing results, branding, and improved website metrics, as well as for advertising.

How much of “personalized learning” as imagined and built and sold by tech companies is precisely this: metrics, marketing, conversion rates, customer satisfaction? (They just use different words, of course: “outcomes-based learning,” “learning analytics.”)

Online, “personalization” is how we – we the user and we the consumer as, let’s be clear, those are the frames – are convinced to take certain actions, buy certain products, click on certain buttons, see certain information (that is to say, learn certain things). “Personalization” is facilitated by the pervasive collection of data, which is used to profile and segment us. We enable this both by creating so much data (often unwittingly) and surrendering so much data (often voluntarily) when we use new, digital technologies. “The personal computer” and such.

(You know it’s “personal.” You get to change the background image. It’s “personalized,” just like that Coke bottle.)

The personal computer first emerged as a consumer product in the 1970s – decades after educational technologists and educational psychologists had argued that machines could “personalize” (or at the time, “individualize”) education.

Among these first teaching machines was the one built by Ohio State University psychology professor Sidney Pressey. His device, “the Automatic Teacher,” was constructed out of typewriter parts. He debuted it at the 1924 American Psychological Association meeting. A little window displayed a multiple choice question, and the student could press one of four keys to select the correct answer. The machine could be used to test a student – that is, to calculate how many right answers were chosen overall; or it could be used to “teach” – the next question would not be revealed until the student got the first one right, and a counter would keep track of how many tries it took.
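Pressey’s “teach” mode is simple enough to model in a few lines. Here is a toy simulation – mine, obviously, not period-accurate engineering – of the behavior described above: the machine will not advance until the correct key is pressed, and a counter records how many tries each question took.

```python
# A toy model of Pressey's "Automatic Teacher" in teach mode: a question stays
# in the window until the correct key (1-4) is pressed, and a counter records
# how many tries each question took before the machine advanced.
def automatic_teacher(questions, key_presses):
    """questions: list of correct keys; key_presses: the student's inputs in
    order. Returns the number of tries taken on each question."""
    presses = iter(key_presses)
    tries_per_question = []
    for correct_key in questions:
        tries = 0
        while True:
            tries += 1
            if next(presses) == correct_key:
                break  # only now does the machine reveal the next question
        tries_per_question.append(tries)
    return tries_per_question

# Two questions (correct keys 3 and 1); the student needs 2 tries, then 1.
print(automatic_teacher([3, 1], [2, 3, 1]))  # [2, 1]
```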

The “Automatic Teacher” wasn’t Pressey’s first commercial move. In 1922 he and his wife published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

Yes, standardized testing had already become commonplace (in the American classroom at least) by the 1920s, and this practice placed a significant burden upon those teachers and clerks tasked with scoring them. Pressey argued that the automation of testing could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – it should free her for real teaching of the inspirational.” No doubt, these arguments echo those made today about how ed-tech will free the teacher for more individualized attention, instruction, and remediation.

But I think Pressey’s work also serves to underscore this other tension that we find throughout the twentieth century. This isn’t simply about “labor-saving devices” or instructional or administrative efficiency. The “Automatic Teacher” was also a technology of individualization, one that Pressey and others since have insisted was necessitated by the practices and systems of standardization in schools, by the practices and systems of mass education itself.

It’s significant, I think, that early teaching machines were developed by psychologists and justified by psychology – very much a science of the twentieth century. After all, psychology – as a practice, as a system – helped to define and theorize the individual, “the self.” Self-management. Self-reflection. Self-help. Self-control.

Individualization through teaching machines is therefore a therapeutic and an ideological intervention, one that’s supposed to act as a salve in a system of mass education. And this has been the project of education technology throughout the twentieth century.

I recognize that I put “pigeons” in the title of this talk and I haven’t yet made the connection between the history of personalization and the history of pigeon training. It’s there in the history of educational psychology, in the history of behavioral modification, in the history of teaching machines. But I opted to scrap the ending I’d originally written for this talk – one that, I promise, tied it all together. Instead of the pigeons of ed-tech, I feel compelled to end with some thoughts on the politics of ed-tech.

Institutions face an enormous crisis today – one of credibility and trust, one that Chris Hayes identified in 2012 in his book Twilight of the Elites. He argued that

We now operate in a world in which we can assume neither competence nor good faith from the authorities, and the consequences of this simple, devastating realization is the defining feature of American life at the end of this low, dishonest decade. Elite failure and the distrust it has spawned is the most powerful and least understood aspect of current politics and society. It structures and constrains the very process by which we gather facts, form opinions, and execute self-governance. It connects the Iraq War and the financial crisis, the Tea Party and MoveOn, the despair of laid-off autoworkers in Detroit to the foreclosed homeowners in Las Vegas and the residents of the Lower Ninth Ward in New Orleans: nothing seems to work. All the smart people fucked up, and no one seems willing to take responsibility.

We can add to Hayes’s list, of course, more recent events: Brexit and Donald Trump and the latter’s withdrawal last week from the Paris Climate Accord. They can’t even get the weather report right, the President of the United States of America reportedly quipped to friends over golf; why should we trust climate scientists? This “death of expertise” has profound implications, no doubt, for the future of education, scholarship, teaching and learning, democracy. And, as Andrew Keen observed yesterday, we must consider the ways in which “populism” and “personalization” as cultural and political and economic forces might actually be intertwined – how Facebook’s algorithmically driven News Feed, most obviously, has only served to make things worse.

A journalist recently asked the US Secretary of Education about different rates of discipline for students of color and students with disabilities, and if this was a problem her office intended to address. Addressing the racial disparities in school discipline – and addressing this as a civil rights issue – had been a major focus of the Obama Administration’s final few months. Betsy DeVos responded, “I think that every student, every individual is unique and special and we need to be really intent on focusing on the needs of each individual student.”

For DeVos – and for many, many others – “personalized learning” means just this: “we need to be really intent on focusing on the needs of each individual student.” The needs of the individual to the benefit of the individual. But to DeVos – and to many, many others – exalting the freedom of the individual here also means freedom from government control (from government control over the education system). It’s not freedom from corporations, oh no; it’s freedom from the state and more explicitly freedom from the regulations that have been put in place in the last sixty years to try to force educational institutions to be more equitable. We heard Donald Clark argue yesterday that schools need to become unsafe spaces again, but let’s recognize that schools have never been “safe spaces” for most of the people on this planet.

When Betsy DeVos and others say that “we need to be really intent on focusing on the needs of each individual student,” what she doesn’t add is that all risk, in this worldview, would fall on the individual as well, of course. In a world with no institutions – unbundled and disintermediated as Silicon Valley is clearly keen to do – there are no institutional protections. With no government oversight, there is no appeal to civil rights.

So this is our challenge in the face of those calling for “personalized learning” – the Betsy DeVoses and the Mark Zuckerbergs. And it’s our challenge, not only in education technology, but in democracies more generally: can we maintain a shared responsibility for one another when institutions are dismantled and disrupted? Will we have any semblance of collective justice in a “personalized,” algorithmically-driven world?

from Hack Education http://ift.tt/2t1Vx2S

Beyond the LMS


As the Australian delegate, on behalf of the whole continent I say Thanks for Having Me.


This is the context of my institution.


These are our campus locations.


For size and comparison – here’s an overlay of the US.

And I live and work from Wagga Wagga, one of our main campuses.


What I wanted to do today was talk about the Learning Management System, or more importantly what might come after it. But before we get to that I wanted to start with a look at the current state of play.


Over the last decade the LMS has become synonymous with online learning. The LMS has become the default. To many it defines what online learning looks like, what’s possible and what its limitations are.


An element of control is baked into a centralised system. It’s one of its distinct features, but it has some significant side effects. One is an embedded power dynamic that prioritises institutional needs over students’, which often reinforces didactic teaching methods where teaching is delivered from a central point.


It’s for that very reason that a key trait of the LMS is a lack of user Autonomy and Agency. Teachers and students lack any real ability to self-govern or to act on their own.


From a central space all can be observed, monitored and tracked. This surveillance is often marketed as “analytics”, and while it may indeed be able to offer some meaningful data it does so at the expense of dialogue and perhaps more importantly permission.


I’d suggest we have reached peak LMS. It has achieved saturation in the market, so there is little growth left. As institutions we are now all beholden to vendors for innovation, and unable to offer substantially different products or services. When everything begins to look and feel the same, the return for having an LMS will begin to diminish. Instead of being of value it starts to be a hindrance. If our goal is to create a distinctive curriculum and learning experience, then the LMS simply can’t provide that.


There’s a lot of discussion around the concept of the Next Generation Learning Environment. It’s being touted as the solution to the current woes around the LMS.


I am however quite skeptical. You see it’s the same centralised model, with the same inherent problems, the same structures, the same limitations. Sure it’ll be better, faster, stronger – but it won’t change anything.


It won’t create change where it’s needed. It won’t change the visions of what Online Learning looks like.


I’d suggest that the LMS, the default, acts as a container for our thinking. Just like a vessel does with liquid – it shapes the contents. It shapes the performance and what we can imagine is possible. The container provides hard edges, limitations which define how and what we think and do.


So that’s the LMS, but what about our current practices in online learning?


One of the underlying problems is that online learning hasn’t yet adapted to the medium – the web. We’ve simply sought to recreate the physical classroom in the online space. This is what we do with any new medium – radio replicated the theatre, television replicated radio, the early web replicated print. What has happened in the online learning space is a continuation of that trend – initially the new medium tends to copy the old one, but over time it develops its own distinct form and function.


But online learning has yet to go through that phase. There are examples on the fringes – Downes, Siemens and Cormier’s work on connectivist and rhizomatic learning for example. But for the most part online learning is still an attempt to replicate the Physical Classroom in the medium of the web.


The fact that the biggest LMS company is called “Blackboard” is not a coincidence. Current online learning is an attempt to simply replicate classroom practice. It doesn’t embrace the web. It doesn’t seek to utilise the medium, instead it walls it all off and out.


The LMS also enforces arbitrary Time Boxes which have a profound impact on learning. Access tends to be limited to a session – 6 months – and then students are locked out. Despite the fact that they paid for the learning that occurred there, students no longer have access to the discussions, wikis or content that lives in the LMS. This Time Boxing effectively forces students to start from scratch every session – their profiles, their identity, their network gone. And there’s no way to come back. They can’t return after the fact – to revise, reread, rediscover – and it reinforces this concept of learning as a linear process, all done in step, together and at the same time.


This creates what I’ve called Learning on Rails. Similar to the style of video games where you are immersed in a realistic environment, but have no free movement to explore, simply to complete each task, one after the other. Online Learning tends to consist of linear tasks. Navigating information and working through content is done not as an exploration of a hypertext document – but as a series of Next buttons.


So what could the future look like?


If we were looking at the medium itself, what is unique about it? How could we model pedagogies that utilise those traits and features?


Over the past two years I have been working on developing a vision for what online learning could and should be. Based on the large body of educational research that exists, the aim has been to pull together a cohesive model that establishes clear elements to aid the design, development and delivery of online courses.


This is the Online Learning model that we developed for CSU. We identified these key elements as part of a curriculum that encourages and enables engagement to occur.


We have also developed the Online Learning Exchange. There you will find a more extensive description of the model. We have also developed a range of strategies that can be used to help implement the model in courses, and in the design, development and delivery process. The Mixer is a tool to map out practices and how intense their adoption is, the aim being to understand that there could and should be variations in the levels depending on the subject and discipline area. It may also help diagnose issues and help direct the design process. The applications area provides case studies and shows how the model might be adapted over a course, tying in delivery methods and techniques as well as assessments.


This body of work is aimed at moving the university – the largest provider of distance education – into a new way of doing things. It’s a realisation that while the correspondence model of education works, it has significant inherent problems too. Going online has provided us and many others with a cheaper and easier way, but it also provides an opportunity to rethink what and how we do things.


What our work on the online learning model has uncovered are large gaps between what we want to be able to achieve pedagogically and what the technology, primarily the LMS, is actually capable of.

… but ..


At the moment the LMS is necessary in many cases. It provides a backbone and integrations with administrative functions.


There are issues to do with scale and operations that a Domains program isn’t yet ready or set up to handle. The practical perspective is that if we want to do away with the LMS, then we have to develop a viable alternative.


So let’s look at one way of getting there.


The alternative to the centralised system, and what I think is the key to changing and transforming what online learning looks like, is moving to a distributed system. The Internet is a distributed system. Its success comes from that underlying infrastructure – one that is shared and open. Designed to be more resilient to breakdowns and less vulnerable to attack, it also acts to distribute power so that it is less abused and better reflects the needs of all stakeholders, especially the small and weak.


It’s for this reason that the internet has become a place where emergence happens, fostering more innovation and discovery because people are empowered to do so. They have autonomy and agency within this structure and the ability to carve out and create their own personal and virtual spaces to share.


Domains themselves are a distributed system. They provide each individual a space for autonomous creativity and expression.


They embody these key elements of what we want online learning to look like.


But Domains are really only the infrastructure. They provide a mechanism to get things done, but not the method to achieve it. We have to build something on top of that solid infrastructure.


My personal journey has been very much focussed on how we can replace the LMS. A couple of years ago I started thinking about how we could do this. What would it look like? And I started with MYOS. What if we created something like an operating system where we can run all of our own apps?


And while I think it’s a sound idea – the reality is that this is too big a task. Recreating applications is a hell of a lot of work, but increasingly we don’t need to do that anymore. The main reason is the rise of the API.


Application Programming Interfaces provide a way for different systems and applications to interact with each other. They can share data, send messages to each other and trigger routines to provide different inputs and outputs. APIs are how apps talk to each other. They’ve been part of large applications for a long time but were locked away in the source code. Today, though, more and more applications are running on the web – and have opened up their APIs. This means developers can link up services – you can cross-post to social media, for example.
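A tiny, deliberately hypothetical sketch of that kind of glue: one “publish” action fanned out to several services. Every name and URL below is invented; real services (WordPress, Twitter, and so on) would each have their own authenticated HTTP APIs behind these stand-in functions.

```python
# Hypothetical API "glue": one publish action fans out to several services.
# The service names, URLs and publisher functions are all invented stand-ins;
# real publishers would issue authenticated HTTP requests to each service's API.
def cross_post(post, publishers):
    """publishers maps a service name to a function that sends the post via
    that service's API. Returns the per-service result (an id, URL, etc.)."""
    return {name: publish(post) for name, publish in publishers.items()}

# Stand-in publishers, just to show the shape of the fan-out.
publishers = {
    "blog":   lambda p: f"https://blog.example/posts/{p['slug']}",
    "social": lambda p: f"posted: {p['title']}",
}
print(cross_post({"slug": "hello", "title": "Hello"}, publishers))
```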


Last year at the Indie EdTech meetup we started discussing this idea of the Personal API. And having spent some time with and listening to Kin Lane, I started to realise that you don’t need to run all your own apps. You just need a way to be in control and to coordinate different services and get them to talk to each other. This way if you want to use Dropbox for storage or Amazon S3 – that’s fine. If you want to use WordPress or Known – that’s cool too. Utilising a variety of existing web services is actually preferable to building your own, because then it is personal – it’s up to you as an individual to make choices. In this world Agency and Autonomy are baked in.


Utilising APIs at an individual level takes this concept of the web – small pieces loosely joined – and makes it a much more serious proposition. As an individual all these “life bits” are connected – to me. And if I can programmatically control those bits – how they connect and interact – then all of a sudden we have a very new and very powerful tool. At the same time, if we utilise existing systems and applications then we also have something that’s very lightweight in terms of development.


So my idea is to develop a Distributed Learning System. A structured way of utilising a range of technologies to configure a viable alternative to the LMS.


Combining these three key elements we can create a truly Distributed Learning System.


Utilising Domains as the infrastructure, Applications for functionality and APIs to connect everything together and allow data to be pushed and pulled throughout the system.


In this system each student and staff member would have their own System running on their Domain – a Node. Nodes would act as their own entities – federated points, able to act autonomously but designed to allow connections to be made.


And Nodes would connect to Hubs. These could represent subjects or courses – but are much more flexible than that. They could be set up for projects, research, committees and collaboration. Hubs define relationships between Nodes – they allow an agreed set of rules to be developed and define the nature of the relationship: what data will be shared, with whom, and for how long.


Hubs establish relationships and conventions that allow aggregation and sharing to occur between nodes.
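One way to picture that "agreed set of rules" is as a small, machine-readable agreement between a Node and a Hub. This is a hypothetical sketch – every field name and URL below is invented for illustration, not part of any existing system:

```python
from datetime import date

# Hypothetical sketch of a Node-to-Hub sharing agreement: what data is
# shared, who it may reach, and for how long. All field names invented.
agreement = {
    "node": "https://student.example.com",
    "hub": "https://course101.example.edu",
    "shared": ["posts", "assignments"],  # data the Node agrees to expose
    "audience": ["teacher", "peers"],    # who the Hub may relay it to
    "expires": date(2017, 12, 15),       # the relationship has an end date
}

def is_active(agreement, today):
    """A Hub honours the agreement only until its expiry date."""
    return today <= agreement["expires"]
```

The key design choice is the expiry date: the relationship is negotiated and time-boxed, rather than data being made available indefinitely for the institution to collect.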


They allow content to be moved around, communication to occur, assessments to be submitted and feedback to be shared.


This map provides a way of understanding how the different components of the system fit together. The Node acts to manage the personal side of the equation – connecting apps and features that are part of the Domain’s infrastructure.

At the same time the Hubs link together institutional systems and data with those of the Nodes. They establish a handshake agreement between parties to ensure that the relationship is negotiated – data isn’t simply made available and vacuumed up by the institution.


But this isn’t a one-to-one relationship – this is multiple Nodes connected, moving data between students and teachers.


Students and teachers would interact with their own Node – not the LMS or another site. Their Node. Messages from the Hub could be displayed, communications sent back and forth, content would be federated so that students could read, annotate and interact with their materials – and retain them forever. All the tools we currently use for learning would still work – but in a way that is shaped by the student, and with them having access and choice.


The radical part of this is that students would be able to take their learning with them wherever they please – and for as long as they please. They wouldn’t be chained or confined by the institution any more. We could start to see this concept of lifelong learning actually be supported by the technology. Students would retain copies of their learning and be able to use it how they wish – creating eportfolios, showreels, blog posts; reusing and repurposing their learning.


At the moment Domain of One’s Own relies on co-opting open source applications to achieve certain needs. Blogging tools like WordPress have become powerful tools in the hands of skilled artisans, but outside of those few individuals have we gone much further than simply blogging? What if we used those applications and the infrastructure of Domain of One’s Own to develop tools specifically for learning?


I still refer to this post from Andrew Rikard when thinking about domains and students. One way of thinking of the Distributed Learning System is as a way to unbundle the student’s learning from their domain and digital identity. By using APIs we can be more focussed on who we share content with, and how. And in doing so it not only provides a safe space for students to work in – and still engage with the web – but lets them do it on their terms, maintaining their authority and avoiding turning domains into the next checkbox assessment.


The real potential of the DLS is the development of new applications and tools. To utilise the latest technologies and the open nature of the web and software to create new applications that focus on learning, on a pedagogy of the web. We develop methods of learning that are of the web, and are based on discovery, exploration, creativity and reflection.


At the moment there are some fantastic innovations happening out there – over this conference we’ve been introduced to just some of them. But we’re dealing with blips at the moment. To improve and make a difference to online learning, and perhaps education as a whole, we have to work to share our knowledge and experiences. We also need to make it easy to adopt them – to learn from one another and from our experiences. By creating a system – and while that word may scare some people, it’s really just a way of working – we could share more easily. APIs provide a way that we could share, adopt and adapt new developments more easily, between users and institutions. Having some commonality would provide some cohesion.


Moving into the future, we want to develop and deliver a truly distinctive curriculum and learning experience – one that I think would produce the kind of graduates who would thrive in the future. A distributed learning system radically changes the possibilities and provides a way to really develop self-directed learners. By providing students with a level of autonomy and agency that is simply not possible within the LMS and centralised systems, they will develop the skills to manage and define their own learning in a lifelong way.

If you have comments, questions or ideas – let me know. I’d love to hear them and bounce ideas around.

from Heart | Soul | Machine http://ift.tt/2sZB4eW

The Digital Revolution Will Not Be Powerpointed (nor MOOCed)

I’m not a MOOC completer. I am zero for life.

My longest duration has been maybe 2 weeks, and more typically, much less.

I am not denying that others get value from them and develop rich connective experiences. But I find the recipe structure and the activity behind a login as lifeless as watching paint dry. Why must all courses be so– “course-y”?

Perhaps one day I will meet the right one. One exception that has been pointed out to me is ModPo, or Modern Poetry, which seems to defy the bore factor and exists in parts outside the MOOC box. I think that’s the one I stayed in the longest.

Maybe I am just a crappy student.

Kate Bowles told me about The Active Citizen in the Digital Age; she said the NovoEd platform was worthwhile for its use of small groups.

I was truly interested, I swear! After co-teaching an open course (not a MOOC) with Mia Zamora in Networked Narratives where we had students engage in issues and use tools on the web to communicate, express, network… well I thought I could learn more for the next time.

The course is offered by Stanford, you know the place where some believe MOOCs were invented (Canadians get no respect).

There is a highly polished intro video with the two profs [at Stanford], starting with a flyover of the Stanford campus, upbeat music, and voiceovers over photos of people [not at Stanford] doing activism.

And then it’s the structure. Little modules. Sequenced logically. Start with 3 objectives. Watch a video of people telling me stuff I can read. An article to read. An assignment to say something in a forum.

And here we go, the assignment for week 1:

What are the actions you take today in the three sectors of democracy (politics, the marketplace and civil society) to achieve impact in the issues you care about?

You have three options for submission. (1) Create a Powerpoint with images, or (2) record a brief video of up to 3 minutes, or (3) write a few paragraphs that tells us about your actions in the three sectors.

Okay, I did jump on the Powerpoint, especially because for week 2 they were shared as examples of quality work.


Please show me one place in the world people are exercising their activism online using freaking Powerpoint.

The assignment for week 2 is to form teams. And write a Mission Statement for our team.

Few phrases launch me more deeply into a coma than “mission statement” — those iceberg lettuce word salads where every ounce of humanity is squeezed out, swept away.

At the same time as I am not keeping up with the MOOC, I am deeply engrossed in reading Zeynep Tufekci’s Twitter and Tear Gas, with her direct experience participating in the Mexican Zapatista uprisings, the Arab Spring, the Occupy Movement, and the protests in her home country of Turkey in Gezi Park.

I have yet to read of any of these efforts starting with people crafting a mission statement, certainly not the 140journos group in Turkey:

These young people had indeed thought about the project one day and started it the next. The details of what they wanted to do were vague: turn social media into a platform for journalism, break the censorship they knew dominated mass media, and become intermediaries for the public. They did not know what the result would be, or that it would turn out to play a crucial role. With all the digital technologies at their disposal, they could start building, and ask questions later.

Well maybe once they checked off their course objectives. On a rubric.

The revolution will not be Powerpointed.

modified the text from “#nowplaying Gil Scott-Heron The Revolution Will Not Be Televised #LP” flickr photo by nworbleahcim http://ift.tt/2qiehcp shared under a Creative Commons (BY-SA) license

I find most courses, and also academic presentations, ignore the basic tenets of film and storytelling: lead strong, hook people in, and take them on the shape of a journey where the end is not revealed at the start.

We are living right now in a momentous, disturbing, maybe even socially destructive time of upheaval; how can you not lead with what is going on all around us? There is activism and suppression happening every day, but instead the course leads with a more-or-less abstract take on the three sectors of democracy.

And this is an example of teaching on the web that uses that beautiful HTTP protocol as simply a delivery mechanism. Keep stuff behind the login. Do not venture out and use the platforms and media where digital activism is happening.


I am most certainly being judgemental, but I cannot be part of such a cloistered, bubbled experience. I cannot see any relevance to what is happening right now in this *****ed up world.

In Networked Narratives we did not make civic activism an overt goal; in fact we wanted to have our students create web media and narratives, alternative truths, exploring/creating identity, having open networked experiences, so that, when their awareness gets more awakened, they have some tools and skills at hand to do something. More than creating memes and gifs and twitter bots for the laughs, but for a purpose. Their purpose.

We’d never do group mission statements. We’d do hashtags. We’d define a meme image to represent us. We’d develop code language for covert communication on open channels.

We had no budget beyond Mia’s role as a professor and the meager amounts I get for being adjunct. I splurged $35 on a vanity domain and hosted it on my own Reclaim Hosting account. We used free open blogs, annotation tools, media tools. Our “high end” video production was using Google Hangouts to record and my own goofy editing in iMovie.

Our students created memes, gifs, explored bots, learned code ciphers, annotated readings, they visited narrative experts in their place of creativity and conversed on issues with others in Mexico, Vermont, Australia, and Egypt. Our course videos were ‘hacked’ by mysterious entities who turned out to be interested in our fate. We more or less ran TeachProv every week. Students created alternative personalities, gave them voice with twitter bots, explored problems of the Pepsi ad and Slacktivism.

I’m by no means trying to say our course is any better. I am not sure we really “taught” anyone the underpinning theory of activism. But what we did exists and lives in the open web, not just on it, not hidden behind a login. We did this in the places where digital activism happens.

We did stuff. And it was not Powerpoint. Not mission statements.

The digital Revolution, it will not be Powerpointed, and you will not understand it inside the glass bubble of a MOOC.

Extra Extra

This video was generated by Sarah Honeychurch out of the pile of words in this post with Lumen 5

Featured Image: Generic screenshot of Powerpoint editor with a little bit of re-writing the template placeholders. Very questionable rights to attribute. Do I stretch the invocation of fair use (remix for parody) or just go WTFPL?

The post “The Digital Revolution Will Not Be Powerpointed (nor MOOCed)” was originally pulled like taffy through a needle’s eye at CogDogBlog (http://ift.tt/2qiaegu) on May 22, 2017.

from CogDogBlog http://ift.tt/2qiaegu