Micro.blog Project Surges Past $65K on Kickstarter, Gains Backing from DreamHost

With one week remaining on its Kickstarter campaign, the Micro.blog indie microblogging project has surged past its original $10K funding goal with $66,710 pledged by 2,381 backers. This puts project creator Manton Reece closer to his stretch goal of $80K, which would enable him to develop a Safe Replies feature to preemptively combat abuse on the platform and hire a part-time community manager.

Micro.blog also picked up support from DreamHost this week, pushing the project past the $50K mark. The hosting company pledged $5,000 towards the campaign.

“What ever happened to the vision of the open web as a distributed network of websites that were owned by their creators?” said Jonathan LaCour, SVP of Product and Technology at DreamHost. “We’d like to make it as easy as possible to launch a WordPress-powered microblog on DreamHost that integrates well with Manton’s upcoming Micro.blog service.”

DreamHost (and every other hosting company) obviously has a vested interest in getting people to see the need for their own digital presence. The biggest obstacle for WordPress customers, however, is that joining the IndieWeb is not yet convenient. DreamHost is planning to take its support of Micro.blog one step further and create an easy way for customers to get started with independent microblogs.

“As a followup to our contribution to Manton’s Kickstarter campaign, we’re planning on working on making a streamlined, pre-configured Indie microblog with WordPress at DreamHost,” LaCour said in the #indieweb channel on IRC yesterday. “I tend to agree that a simplified, pre-packaged WordPress setup would go a long way to driving Indieweb adoption.”

When asked whether the company would be utilizing Micro.blog or some other service, LaCour said it has not been decided yet. He said the idea is that people could create an independent microblog hosted at DreamHost that is compatible with Micro.blog and other indie microblogs.

“Our major focus at the moment is getting people excited about owning their own website (and entire digital identity),” LaCour said.

Micro.blog is Aiming for Incremental Webmention Support

Webmention is a protocol similar to pingback for notifying a URL when a website links to it and also for requesting notifications when another site mentions one of your URLs. It is an important part of facilitating decentralized communication across the web. On January 12, 2017, the Social Web Working Group published a W3C Recommendation of Webmention with the specification contributed by the IndieWeb community.
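To make the mechanics concrete, here is a minimal sketch of sending a Webmention in Python with the requests and BeautifulSoup libraries (my own illustration, not Micro.blog's or WordPress's implementation, and the URLs are hypothetical): discover the target page's Webmention endpoint, then POST the source and target URLs to it.

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def discover_endpoint(target):
    """Find the target's Webmention endpoint via the HTTP Link header or an HTML link/a element."""
    resp = requests.get(target, timeout=10)
    # An endpoint can be advertised in a Link header: <https://example.org/webmention>; rel="webmention"
    if "webmention" in resp.links:
        return urljoin(resp.url, resp.links["webmention"]["url"])
    # ...or in the page itself as <link rel="webmention"> or <a rel="webmention">
    soup = BeautifulSoup(resp.text, "html.parser")
    for el in soup.find_all(["link", "a"], href=True):
        if "webmention" in (el.get("rel") or []):
            return urljoin(resp.url, el["href"])
    return None

def send_webmention(source, target):
    """Notify the target page that `source` links to it, with a form-encoded POST."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return None
    return requests.post(endpoint, data={"source": source, "target": target}, timeout=10)

# Hypothetical URLs, for illustration only:
# send_webmention("https://myblog.example/replies/42", "https://anotherblog.example/a-post")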

WordPress doesn’t natively offer Webmention support and the core trac ticket for adding the feature has had little discussion.

During a preliminary discussion on Slack last year, WordPress lead developer Dion Hulse said he thought Webmentions would be a great feature plugin and that there are a few people interested in it. There hasn’t been much movement on this front in core, but a Webmention plugin is available in the directory.

Reece is working on incorporating IndieWeb protocols into Micro.blog but said it will likely launch with incremental support for Webmention.

“It might take a little while to get everything IndieWeb in there, but that’s the eventual goal,” Reece said. “I’m committed to Micropub and microformats and still exploring how best to support Webmention. (It might be partial support with more later.)”

Micro.blog doesn’t currently handle mentions and replies using Webmention but Reece said his eventual goal is to include it.

“The first step to me is getting more people their own microblog so that the infrastructure for cross-site replies is even possible,” Reece said.

Micro.blog Puts the Focus on Indie Microblogging, Instead of Replacing Twitter

Reece also launched a Slack community where the project’s backers can discuss Micro.blog and other microblogging topics. He said he initially had reservations about starting something on Slack but was surprised to see the community has already grown to more than 300 members.

“I didn’t want to distract from any posts that should happen in the open on blogs,” Reece said. “Some discussion just fits better in chat, though. There’s an emerging community of indie microbloggers. Having a place to share tips, tools, and ask questions about Micro.blog just makes sense.”

Many of the project’s backers are eager to create a community of their own and are interested in using Micro.blog as a Twitter replacement. Other services have attempted to provide alternatives to posting directly on Twitter but none have caught on enough to significantly push IndieWeb adoption forward. App.net, one of the most promising ad-free, microblogging networks, went into maintenance mode in 2014 and will be shutting down March 15, 2017.

Reece, who was an early fan of App.net, published a thank you note to the service’s creators for trying something risky and creating a community around their ideas. He believes it’s the right time for another open platform to emerge.

“We don’t need just another Twitter or Facebook clone,” Reece said. “We need a new platform that encourages blogging on the open web.”

Nevertheless, Reece is preparing Micro.blog from the outset to be capable of replacing Twitter’s functionality, which is one of the reasons he is focusing so heavily on ensuring the platform doesn’t get overrun with abuse. Reece wants to avoid the pitfalls that have contributed to some of the more negative aspects of Twitter, but his focus is on encouraging people to blog from their own space.

“Micro.blog is a success if more people blog,” Reece said. “To provide value it doesn’t need to replace Twitter, but it can.”

The project’s mobile app is key to making it convenient for users to read other people’s posts and post directly to their own websites from the same interface. Reece shared another preview of the iPhone and iPad app that will be ready at launch and said he hopes there will be other apps developed by the community.

“Most traditional RSS readers can’t post,” Reece said. “I think this makes for a more complete experience, and because it’s just a blog I can still use other apps and platforms to post.” He plans to give Micro.blog a 280-character limit before truncating the post.

Keeping the timeline fast and making posting convenient will be critical to the platform’s success as an alternative to the dominant social media silos. Polling blogs for new content is not very aggressive in the current prototype but Reece is tuning this to provide a better experience. The platform uses rssCloud and WebSub (formerly PubSubHubbub) to provide a more Twitter-like, real-time experience.
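For a sense of how little machinery WebSub actually requires, here is a rough sketch of the subscriber side in Python (the hub, topic, and callback URLs are hypothetical; this is not Micro.blog's code): the subscriber asks the feed's hub to push new entries to a callback URL instead of waiting for the next poll.

import requests

# Hypothetical values; a real subscriber reads the hub and topic URLs from the
# feed's <link rel="hub"> and <link rel="self"> elements.
HUB = "https://hub.example.com/"
TOPIC = "https://someblog.example/feed.xml"
CALLBACK = "https://timeline.example/websub/callback"

# Per the WebSub spec, a subscription request is a form-encoded POST to the hub.
resp = requests.post(HUB, data={
    "hub.mode": "subscribe",
    "hub.topic": TOPIC,
    "hub.callback": CALLBACK,
})
# A 202 response means the hub will verify the callback (with a GET challenge)
# and then push new feed content to it as soon as the blog publishes.
print(resp.status_code)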

Micro.blog seems to be landing at the right time, as the idea has already resonated with more than 2,300 people willing to back the project. The service hasn’t even launched but the concept behind it is already attracting a supportive community eager to explore better ways of powering microblogging on the web.

“You don’t replace Twitter overnight, or even try to,” Reece said. “But step by step, we’re going to end up with a better web, and I think independent microblogging is part of that.”

from WordPress Tavern http://ift.tt/2k6U3D2
via IFTTT

Collecting Student Work with Google Forms

[Lead image: file folders organized in a file box]

A good number of us here at ProfHacker prefer to avoid paper whenever possible. When I teach my writing course each fall, I have my students use Google Documents so that it’s easy to see an essay’s development over time.

For classes where it’s not essential that I see a student’s revisions, I prefer that essays be submitted in PDF format, so that I can comment on essays using my iPad. (My current favorite app for this purpose is PDFExpert; Jason and Erin have both made use of iAnnotate.)

What I’ve always found a bit of a bother, though, is organizing those PDFs. I’d looked around a bit, but I hadn’t found a good way to streamline the process.

Until recently, that is. Not long ago, I was checking something for a colleague in Google Forms, and I discovered a feature that apparently rolled out back in October: the ability to upload files.

This feature provides the kind of streamlined organization process I was looking for. For assignment submission, I can provide students with a link to a Google Form that contains a file upload question:

A file upload question in Google Forms

If I want to ensure that students submit their essays in PDF format, I can set the question to allow only that file type:

Restricting allowed file upload types in Google Forms

The results spreadsheet will contain links to files that students upload.

What makes this feature so useful to me, though, is that Google Forms creates a folder corresponding to the form. Within that folder, it creates one folder for each question of the “file upload” type. That folder, as one might expect, contains all files uploaded in response to the question.
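If you ever want to process the submissions outside of Drive, the results spreadsheet (exported as a CSV) is straightforward to script against. Here is a minimal sketch in Python; the column names are hypothetical, and the file IDs are parsed on the assumption that the upload links look like https://drive.google.com/open?id=FILE_ID, so adjust both to match your own form.

import csv
from urllib.parse import urlparse, parse_qs

# Map each student's email address to the Drive file ID of their uploaded essay.
# Column names below are placeholders -- use the headers from your own results sheet.
submissions = {}
with open("results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        link = row["Upload your essay (PDF only)"]
        file_id = parse_qs(urlparse(link).query).get("id", [""])[0]
        submissions[row["Email Address"]] = file_id

for student, file_id in submissions.items():
    print(student, file_id)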

Because PDFExpert allows me to sync particular Google Drive folders with the app, there’s nothing I need to do to organize my students’ essays once I’ve collected them via the form. I can open essays in PDFExpert and mark them up as appropriate; all of my annotations will sync with the relevant Google Drive folder. I can then return students’ work by sending each student a link to her marked essay (alternatively, I could send it to her as an attachment).

The only drawbacks I’ve discovered so far are (1) that the feature is only available to G Suite users and (2) that only users within one’s own organization can upload files. So the feature’s not available for personal accounts, or for those who have a grandfathered Google Apps for Your Domain setup. Anyone working at a campus already using Google apps, though, should be able to take advantage of the new(-ish) feature.

What about you? If you have favorite tools or methods for collecting and organizing student work, let us know in the comments.

[Lead image CC-licensed by Flickr user Becky Wetherington]

from ProfHacker http://ift.tt/2iA7VSV
via IFTTT

2016 Flickring By Too

Just one more under the wire blog post for 2017, and yes it’s about photos. I had to wait until I posted photo number 366 for this year…

“2016/366/366 Staying Grounded” flickr photo by cogdogblog, shared under a Public Domain Dedication Creative Commons (CC0) license

… so I could try the “wee little” script John Johnston posted, which generates a rapid montage from a series of flickr photos. There was a now-extinct site called “pummelvision” that did this automatically (I made one in 2011).

John is among the people whose reclaiming ingenuity I admire, and his script worked perfectly (I took out the datesince parameter, John, since mine were all tagged). It chugged for about 5 minutes downloading all the images, and then the command line thing he provided just churned, and out popped a movie.

This is way better than a fully automated site that disappears on you.
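For anyone curious about the general mechanics (this is a rough sketch of the idea, not John's script): once a year of photos is sitting in a folder, a few lines of Python driving ffmpeg will stitch them into a rapid-fire montage.

import subprocess
from pathlib import Path

# Hypothetical folder holding the downloaded daily photos, named so they sort chronologically.
photos = sorted(Path("flickr-daily-2016").glob("*.jpg"))

# Build an ffmpeg concat list: each photo shows for an eighth of a second.
with open("frames.txt", "w") as f:
    for p in photos:
        f.write(f"file '{p}'\nduration 0.125\n")

# Scale and pad mixed-size photos to a common frame, then encode a standard mp4.
subprocess.run([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "frames.txt",
    "-vf", "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2",
    "-pix_fmt", "yuv420p", "montage.mp4",
], check=True)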

https://player.vimeo.com/video/197658605

I also ran his “averaging” script; no idea what it means, but there is a pale orange blob in the middle. It’s probably Felix-shaped.

The average of all my daily photos.

Thanks for the present, John!


Top / Featured Image: The ImageMagick-generated montage of all my flickr daily photos, generated with John Johnston’s script – since they are all licensed CC0, so is the montage. Because I can.

The post "2016 Flickring By Too" was originally pulled like taffy through a needle’s eye at CogDogBlog (http://ift.tt/2j5cf0d) on December 31, 2016.

from CogDogBlog http://ift.tt/2j5cf0d
via IFTTT

The Books I Read in 2016

At the end of each year I list the books that I have read during that year. Earlier years were 2015, 2014, 2013 and 2012. Below you will find the list of books that I’ve read in 2016. Every year I also include an overview of my other media consumption habits (magazines, RSS feeds and podcasts).

This year I had planned to make more conscious decisions about what I would read, including more female, more non-Western and more non-white authors. I also wanted to read more books that were at least 30 years old. I managed to read 53 books in 2016. About 25% of the books that I read were written by women (that is the same percentage as last year), and about half of the books did not come from the US or the UK (but the majority still were ‘Western’). Most books were quite recent. I guess you could say that I failed to reach most of my goals.

I’ve ordered the list of books into categories that make sense to me. These are the books that I’ve read and what I thought of some of them:

Digital rights

Ai Weiwei is one of my heroes and the book that was made by FOAM clearly shows why: he has experimented with surviving total surveillance. Ruben Pater wrote a beautifully designed book about the fact that there is politics in every design (I wish developers would start realising this about their code). Nissenbaum’s book gave me a new way of framing the privacy debate and the Dutch bestseller by Martijn and Tokmetzis had an inspiring final chapter enumerating the lessons that digital rights activists can learn from activism around climate change.

  • Ai Weiwei — Freedom of Expression Under Surveillance (link)
  • Ruben Pater — Politics of design (link)
  • Helen F. Nissenbaum — Privacy in Context (link)
  • Center for Long-Term Cybersecurity — Cybersecurity Futures 2020 (link)
  • Tijmen Schep — Design my privacy (link)
  • Maurits Martijn and Dimitri Tokmetzis — Je hebt wél iets te verbergen (link)
  • Frank Pasquale — The Black Box Society: The Secret Algorithms That Control Money and Information (link)
  • Wetenschappelijke Raad voor het Regeringsbeleid — WRR Rapport 94 – De publieke kern van het internet (link)
  • Inez Weski — De jacht op het recht (link)
  • Byung-Chul Han — De vermoeide samenleving (link)

B00k C7ub 4 N3rd$

We managed to read seven books with our book club (which is the same number as in each of the previous two years). Cathy O’Neil just might have written the Silent Spring of our age. The book by Christian and Griffiths was a fresh way of looking at algorithms, but was also a bit formulaic. You can’t say that about Frank Westerman’s book in which he explores whether language can be a weapon against terrorism. Dan Lyons wasted my time with an awful book which was my least favourite read of the year.

  • Cathy O’Neil — Weapons of Math Destruction (link)
  • Brian Christian and Tom Griffiths — Algorithms to Live by (link)
  • Frank Westerman — Een woord een woord (link)
  • Axel M. Arnbak — Securing Private Communications (link)
  • Astra Taylor — The People’s Platform: Taking Back Power and Culture in the Digital Age (link)
  • Andrei Soldatov and Irina Borogan — The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries (link)
  • Dan Lyons — Disrupted (link)

Philosophy

Last September I started studying philosophy at the University of Amsterdam. I should be able to get my master’s degree in about two years of full-time study (next to a full-time job). Besides reading a lot of articles, I’ve also read a few books. It was truly a joy to finally do a close reading of Hobbes’ Leviathan.

  • Thomas Hobbes — Leviathan (link)
  • David Miller — The Liberty Reader (link)
  • Michiel Leezenberg and G. de Vries — Wetenschapsfilosofie voor geesteswetenschappen (link)

Self improvement

I guess it does say something about me that I am constantly seduced by self-improvement books. This year I apparently wanted to get better at organising my time, become more disciplined, write more clearly and budget smarter. Highlights were Minto’s classic book about writing persuasive business texts, Newport’s idea of ‘deep work’ as something that we need to try and attain as much as possible during our working hours, and McGonigal’s science-infused explanation of willpower as something that can be depleted, replenished and trained. Linenberger made me change my to-do list habits for the better (which is quite an accomplishment because they were well ingrained).

  • Barbara Minto — The Pyramid Principle (link)
  • Cal Newport — Deep Work (link)
  • Michael Linenberger — The One Minute To-Do List: Quickly Get Your Chaos Completely Under Control (link)
  • Kelly McGonigal — The Willpower Instinct (link)
  • Greg McKeown — Essentialism (link)
  • Jesse Mecham — Four Rules, A primer on living well, within your means (link)
  • Jim Benson and Tonianne Demaria Barry — Personal Kanban (link)
  • Daniel J. Levitin — The Organized Mind (link)

Fiction

There is a huge discrepancy between how much I enjoy reading fiction and how much I actually do read it. All four of these books were quite incredible. I am convinced that Adichie will one day receive the Nobel Prize for literature. I finally read her debut novel and was overwhelmed. The Sellout is a novel unlike any other; I perceived it as a relentless attack on my brain. And Maya Angelou’s youth in the 1930s United States was both shocking and courageous.

  • Chimamanda Ngozi Adichie — Purple Hibiscus (link)
  • Paul Beatty — The Sellout (link)
  • Maya Angelou — I Know Why the Caged Bird Sings (link)
  • Chris Kraus — Summer of Hate (link)

Non-fiction

Coates wrote a brutal book that gave me a visceral reaction and forced me to rethink my position in life. Michael Pollan went on a cooking journey and was so kind to take me along the way. Jessica Abel unlocked the secrets behind the incredible quality of American podcasting and somehow managed to do this in a comic. Jon Ronson and Joris van Casteren both manage to elucidate serious themes with an often hilarious ironic undertone. I had a little war strategy theme going with Che’s book about guerilla warfare and with Richards’ application of Boyd’s thinking to the world of business. Finally, I will certainly vote for the okapi.

  • Ta-Nehisi Coates — Between the World and Me (link)
  • Michael Pollan — Cooked, A Natural History of Transformation (link)
  • Maxim Februari — De maakbare man: notities over transseksualiteit (link)
  • Jessica Abel — Out on the Wire (link)
  • Edward van de Vendel and Martijn van der Linden — Stem op de okapi (link)
  • Geoff Manaugh — A Burglar’s Guide to the City (link)
  • Tim Flannery — Atmosphere of Hope (link)
  • Julius Fast — Body Language (link)
  • Chet Richards — Certain to Win: The Strategy of John Boyd, Applied to Business (link)
  • Joris Luyendijk — Een goede man slaat soms zijn vrouw (link)
  • Erik Kessels — Failed It! (link)
  • Ernesto Che Guevara — Guerrilla Warfare (link)
  • Brian J. Robertson — Holacracy (link)
  • Achille Mbembe — Kritiek van de zwarte rede (link)
  • Jon Ronson — Lost at Sea (link)
  • Joris van Casteren — Mensen op Mars (link)
  • Femke Halsema — Pluche (link)
  • Avinash Dixit and Barry J. Nalebuff — The Art of Strategy: A Game Theorist’s Guide to Success in Business and Life (link)
  • Per Espen Stoknes — What We Think About When We (Try Not to) Think About Global Warming (link)
  • Anoniem — CubaConga (link)
  • Mandy Macdonald — Cultuur Bewust! – Cuba (link)

My consumption of other media

Last year I continued my subscriptions to De Correspondent (I usually read one or two of their articles in the morning), the New York Review of Books and Wired (which I barely take the time to read, but still can’t say goodbye to).

There are a few ‘curators’ who make my life easier by scouring the web and writing up what they find interesting. Stephen Downes and Audrey Watters continue to be my connection to the world of learning. The maintainer of the Dutch Privacy Nieuws website does an incredible job of keeping up-to-date with all the latest privacy-related news items. And Cory Doctorow has this wonderful and insightful take on the digital world. Unfortunately I can’t read his pieces in my RSS reader (the items are no longer full text) and I am forced to go to Boing Boing’s ad-infested website. I would much rather pay a little to have the full text via RSS, but don’t think that is currently possible. I might have to create a little personal scraper to solve this problem. My daily news comes through the Trouw RSS feed. I try to read everything that Ta-Nehisi Coates writes for the Atlantic, keep my eye out for Morozov in the Guardian and am delighted whenever Maciej Cegłowski posts something new. I also follow The Intercept, the technology sections of Nu.nl, Tweakers and the Guardian, and Wired’s security blog.

I did find a bit more time to listen to podcasts every week. I still listen to each and every episode of This American Life, This Week in Tech (with Leo Laporte), 99% Invisible, Radiolab, Reply All and Note to Self. New must-listens are Dipsaus, Een Podcast over Media, Strangers, and Bits of Freedom’s own Inser_User. I then cherry-pick episodes that might be interesting from Benjamin Walker’s Theory of Everything, the Bret Easton Ellis Podcast, De Correspondent (they are doing a nice series made by high school students), Freakonomics Radio, Planet Money, Security Now, Stuff Mom Never Told You (which seems to have released its final episode last week), Team Human (with the incredibly articulate Douglas Rushkoff), Tech Weekly (by the Guardian), The Tim Ferriss Show and Triangulation. Radiolab also made this fabulous series about the US Supreme Court called More Perfect.

What will I be reading in 2017?

Unfortunately I know that I will get to read much less in 2017: my studies will dictate what I spend my reading time on, mostly articles rather than books.

I will be very happy if I manage to read one book every two weeks. Looking forward to doing just that!

from Hans de Zwart: Technology as a Solution… http://ift.tt/2j5ivoQ
via IFTTT

I’m a Reclaimer

Walking home from my friend’s Christmas dinner I was thinking, of all things, about how tired people seem to be of social media and the web, and how much resignation there is about how eff-ed up it can feel. Many of the blogs I relished reading have just gone dark, un-updated in my RSS Reader.

And I also remembered Jim Groom’s announcement of a record-fair-inspired Domains conference this June in Oklahoma. That’s going to be great, but I think we should all band together, turn on our domains and our blogs, and bring others along into the fray – this ought to be the year to rise up and Reclaim the Web.

A 1960s sugar covered song popped into my head. And I am humming out loud, “I’m a Reclaimer”.

It was time to pull out the pen and paper, re-write the lyrics, grab some guitar tabs, and do an edtech cover.

This rough crowing was about 30 minutes of trying it out, and recorded in one take. There are goofs and off-key yelling.

http://ift.tt/2hAZmul

And my lyrics:

G                    D           G
I thought the web was only for status updates

G               D                G
Built by someone else but not by me.

C              G
Trackers are out to sell me

C             G
That's the way it seems.

C             G                    D  (stop)
Surveillance transmitting all of my feeds.


G       C      G     G       C      G    
Then I got a domain, now I'm a reclaimer

G       C        G          C
Not a shred of me stored in their cloud

C       G (1)     C (1)           G (1)     
I'm in charge,    oh     I'm a reclaimer!

        F (1)        (riff)
Couldn't login  there if I tried


G                    D           G
I thought likes were more or less a shared thing,

G               D                G
Seems the more I gave the less I got.

C              G
What's the use in tweetin'?

C             G
All you get is spam.

C             G                    D  (stop)
When I needed insight I got ads.



G       C      G     G       C      G    
Then I got a domain, now I'm a reclaimer

G       C        G          C
Not a shred of me stored in their cloud

C       G (1)     C (1)           G (1)     
I'm in charge,    oh     I'm a reclaimer!

        F (1)        (riff)
Couldn't login  there if I tried


C              G
Trackers are out to sell me

C             G
That's the way it seemed.

C             G                    D  (stop)
Surveillance transmitting all of my feeds.


G       C      G     G       C      G    
Then I got a domain, now I'm a reclaimer

G       C        G          C
Not a shred of me stored in their cloud

C       G (1)     C (1)           G (1)     
I'm in charge,    oh     I'm a reclaimer!

        F (1)        (riff)
Couldn't login  there if I tried


I'm a reclaimer, I'm a reclaimer yeah yeah

Skip the resolutions about weight loss and running marathons; commit yourself to prying away (not completely, but reclaim your time, your data) from the shallow muck hole of social media, and get back to your blog, Jojo, get back to where you once belong (another cover to do).

“It’s too hard” does not wash at all. Look how lousy a musician I am, but I can do what I did in 2 hours, with the help of lyrics and chords shared on the internet, open source Audacity software, and a free account on SoundCloud.

Or start a new one. Or give a friend the gift of a domain from Reclaim Hosting (friends don’t let friends host with GoDaddy). Or move your site off of the upsell / lock down of WordPress.com — Reclaim will do the leg work for you.

Stop complaining about social media in social media. Get back to writing and sharing ideas, rather than link shoving.

Reclaim it All in 2017.

Start now.


Top / Featured Image: A mashup, a remix of the Monkees single. It’s totally a copyright if you believe in the limited scope of what that means. This use is not to sell (listen to how bad the music I made is), and is parody.

The post "I’m a Reclaimer" was originally assembled from spare parts of a 1957 Chevy at CogDogBlog (http://ift.tt/2hB6c3g) on December 25, 2016.

from CogDogBlog http://ift.tt/2hB6c3g
via IFTTT

Fantastic Tool for Untangling Timelines

Sure, one has to wear heavy rubber boots when wading through the twitter pool these days, but dodging the crud, I still find my regular supply of gold nuggets.

I cannot even remember who shared TimeLineCurator with me, but I thank them, and the serendipitous curiosity that made me click. From their own description:

Want to make a visual timeline, but don’t have the time to draw one manually? Or maybe you have some documents, but you’re not sure if the events they depict form a compelling timeline?

TimeLineCurator quickly and automatically extracts temporal references in freeform text to generate a visual timeline. You can then interactively curate the events in this timeline until you are satisfied, or quickly decide that there is no interesting temporal structure within the document. You can also create a mashup of multiple documents against each other to compare their temporal structure.

Watch the overview video:

https://player.vimeo.com/video/123246662

This project, from what looks like a professor, a post-doc, and a grad student at the UBC InfoVis group, does these things well:

  • It addresses a recognizable problem. Making timelines with current tools is tedious: you have to comb sources for dates and enter them in spreadsheets or data tables. This tool analyzes text data for dates (a toy sketch of the idea appears after this list).
  • It lives in the web. Using it does not require battles with Java, installing weird plugins, or learning Python. You can copy and paste an article, but even better, it can read directly from a URL. Smart.
  • It connects to other tools. You can generate a timeline as a static thing, but because it’s elegant, you can create an export file to take your timeline into TimelineJS.
  • It’s not all automated. Once it analyzes the text, you, human, are able to analyze, modify, or delete the points it found. Or add media.
  • It’s an exploration tool. I like that in the video they suggest using it first to explore sources, to see if they contain enough data worth processing.
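TimeLineCurator's natural-language date extraction is far more sophisticated, but the core move of pulling temporal references out of freeform text and sorting them into a rough timeline can be approximated in a few lines of Python (a toy regex-based sketch, not the tool's actual method):

import re

def rough_timeline(text):
    """Toy approximation: find four-digit years plus a bit of surrounding context, sorted by year."""
    events = []
    for match in re.finditer(r"\b(1[6-9]\d{2}|20\d{2})\b", text):
        year = int(match.group())
        start, end = max(0, match.start() - 40), match.end() + 40
        events.append((year, text[start:end].replace("\n", " ").strip()))
    return sorted(events)

sample = ("The Who formed in London in 1964. The Kinks released their debut single in 1964, "
          "while the Rolling Stones had formed in 1962 and the Beatles broke through in 1963.")
for year, context in rough_timeline(sample):
    print(year, "-", context)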

I did a superficial quick test to get a sense of what it does; here is my timeline of British Rock Invaders.

The source was the biographies of The Who, The Kinks, the Beatles, and the Rolling Stones, the information coming from All Music.

This is hardly worth smashing your amp over or firing your drummer. I only removed two extraneous dates, but you can fine-tune every point, discard vague ones, update the titles, or add media.

I like the idea of comparative timelines, but can see using it as well to create a single topic’s timeline. Like the Berlin Wall timeline, developed from the Wikipedia entry and exported to TimelineJS.

Or this example comparing The Guardian and Wired’s coverage of Net Neutrality.

I bet you can think of some great ways to use a tool that takes away the grunt work of collecting data for a timeline and makes room hopefully for the analysis and thinking about it.


Top / Featured Image: The searches for “time line” were too literal, so using Google Image Search (limited to openly licensed images) I got more interesting photos using “time thread”. The colors and tangling of this one worked in my mind.

Thread pixabay image by snd63 shared under CC0 license

The post "Fantastic Tool for Untangling Timelines" was originally dropped like a smoking hot potato at CogDogBlog (http://ift.tt/2iggd5o) on December 27, 2016.

from CogDogBlog http://ift.tt/2iggd5o
via IFTTT

The Best Way to Predict the Future is to Issue a Press Release

This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology. The slides are also available here.

Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media Arts departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as me, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am writing predictions of my own.

One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

Here are a couple of more recent predictions:

“In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

Pray for Harvard Business School. No. I don’t think so.

Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

“Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

People find comfort in these predictions, in these fantasies. Why?

Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

It’s not that hard to identify significant problems with the Hype Cycle, not the least of which being that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And at the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take. What can we learn about the future by looking at the history of our imagining about education’s future. What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

It’s both convenient and troubling that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Reports indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

“The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

“Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

So we can reorganize the bar graph. But it’s still got problems.

The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half.)

And that changes the graph again:

How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer years than 9? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

  • 2006 – the phones in their pockets
  • 2007 – the phones in their pockets
  • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
  • 2009 – the phones in their pockets
  • 2010 – the phones in their pockets
  • 2011 – the phones in their pockets
  • 2012 – the phones too big for their pockets
  • 2013 – the apps on the phones too big for their pockets
  • 2015 – the phones in their pockets
  • 2016 – the phones in their pockets

This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

“65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, and it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.

from Hack Education http://ift.tt/2fF4PPI
via IFTTT

The Gini Coefficient

To introduce the work of Corrado Gini, to which we promised last month to return, recall that the richest 20% of the world’s population consumes 80% of the planet’s resources. To express the grave inequality in today’s distribution of the world’s wealth more precisely and concretely, however, more detailed information is needed: in 1905 the American economist Max Lorenz therefore proposed, in an article entitled Methods of Measuring the Concentration of Wealth, to consider what are now called Lorenz curves.

These are the curves that describe the increasing percentages of wealth owned by increasing percentages of the population, starting from the bottom, that is, from those who own the least. The curves always begin and end in the same way, because 0% of the population obviously owns 0% of the wealth, while 100% owns 100% of it. Beyond that, each curve differs according to the geographic region, the historical period, and the kind of wealth considered: for example, whether it is assets or income, and whether before or after taxes.

There are two extreme kinds of Lorenz curve. In the direction of complete equality lies the 45-degree line, which describes a perfectly uniform distribution of wealth: the one in which not only does 20% of the world’s population consume 20% of the resources, but every percentage of the population consumes the corresponding percentage of the resources. In the direction of complete inequality lies the flat curve in which no one owns anything, except for a single person who owns everything, which makes the curve shoot up to its maximum at the very last point.

Real Lorenz curves always lie somewhere between these two extremes, and they measure the concentration of wealth in a given situation in detail, but at the cost of infinitely many pieces of information: one for every percentage value of the population. In 1912 the Italian statistician Corrado Gini therefore proposed, in the article Variabilità e mutabilità, to extract from each Lorenz curve a single cumulative figure, which today is called the Gini coefficient: it is obtained by measuring the area between the given curve and the 45-degree line as a percentage of the area between that line and the flat curve.

Since it is a percentage, the Gini coefficient is always a number between 0 and 1, which can be expressed more conveniently as a number between 0 and 100 simply by multiplying it by 100. And since it measures how far the corresponding Lorenz curve departs from complete equality in the distribution of wealth, the larger the coefficient, the greater the inequality, and vice versa.
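To make the definition concrete, here is a minimal Python sketch (not part of the original column) that sorts a list of incomes, builds the cumulative Lorenz shares, and computes the Gini coefficient as the area between the Lorenz curve and the 45-degree line, divided by the whole area under that line; the income figures are invented purely for illustration.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a list of incomes, via the Lorenz curve.

    Coefficient = (area between the 45-degree equality line and the
    Lorenz curve) / (area under the equality line, which is 1/2).
    """
    x = np.sort(np.asarray(incomes, dtype=float))  # poorest first
    n = x.size
    # Cumulative share of total income held by the poorest 0/n, 1/n, ..., n/n.
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    # Area under the Lorenz curve by the trapezoidal rule (equal steps of 1/n).
    area_under_lorenz = ((lorenz[1:] + lorenz[:-1]) / 2).sum() / n
    return (0.5 - area_under_lorenz) / 0.5

# Perfect equality: everyone earns the same, so the coefficient is 0.
print(gini([10, 10, 10, 10]))  # 0.0
# Extreme inequality: one person earns everything. With 4 people this gives
# 0.75, and it approaches 1 as the population grows.
print(gini([0, 0, 0, 100]))    # 0.75
```

Multiplying the result by 100 gives the 0–100 scale used for the country figures below.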

The advantage of the Gini coefficient is that it reflects the distribution of wealth in a more refined way than cruder indicators such as a nation’s gross domestic product, which gives no information at all about distribution, or per-capita income, which gives only a rudimentary one: the statistic according to which, if one person eats a whole chicken and another eats none, on average they each eat half.

Moving from theory to practice, between 1800 and 2000 the Gini coefficient of the world as a whole rose from 43 to 71: over the last two centuries, then, the increase in wealth produced by the Industrial Revolution has greatly widened the gap between rich and poor. The same effect occurred in the United States, where the coefficient rose from 39 to 48 between 1970 and 2010, owing to the halving of the top tax rate. In Italy, by contrast, the rise in tax pressure kept the net-income coefficient nearly stable between 1980 and 2005, between 31 and 34, while the gross-income coefficient rose from 42 to 56: that is, high taxes acted as a social rebalancer of wealth.

In general, Gini coefficients make it possible to rank the countries of the world in decreasing order of distributive democracy. The coefficients are lowest in the Scandinavian countries, and low in Europe, Canada, and Australia. They rise to medium-low values in Russia, India, and Japan, and to medium-high values in the United States, Mexico, and China. They reach high values in Brazil, and their maximum in the Central African Republic and South Africa. All of which confirms that wealth and social justice are not only quite different things, but also quite quantifiable ones.

(From the column Il matematico impertinente in the October issue of Le Scienze)

from Il non-senso della vita http://ift.tt/2dvczQU
via IFTTT

Let Us Free Ourselves from Humanism Once and for All

Ennio Flaiano, more than fifty years ago now, wrote in a famous text of his, Un marziano a Roma, words that have been considered prophetic: “The modern era is over. The Middle Ages of the specialists is beginning. Today even the idiot is specialized.” This aphorism was debated for decades, decades in which the internet did not yet exist and in which the idea of a specialized culture seemed a necessary path to progress, an indispensable footnote to generating the future. But Flaiano was wrong.

In 1966, six years after Flaiano’s words, Michel Foucault, a philosopher who spent his whole life studying our society, was asked by the Quinzaine Littéraire whether, in the absence of a good general culture, the future would hold nothing but specialists. He replied: “It is not the common man who is condemned, but our secondary schooling, dominated by humanism. If today the common man has the impression of a barbaric culture, bristling with figures and acronyms, that impression is due to one fact alone: our educational system dates back to the nineteenth century, and in it we still see the reign of the most insipid psychology, the most obsolete humanism, the categories of taste and of the human heart.”

Joel Stein wrote a long cover story in Time on hatred on the internet and on trolls, those who interact on the web through provocative, irritating, off-topic, or simply meaningless messages, with the aim of sowing confusion and poisoning the mood. The idea that the world is becoming ever more aggressive and violent, that the web’s disruptors are multiplying, protected by anonymity, cowardly enough to insult anyone, fomenting hatred, especially racial hatred, had already been explained by Foucault, fifty years earlier, with great clarity.

Flaiano saw the arrival of a new Middle Ages, one in which more and more people would know only one thing, perhaps extremely well, without understanding anything else, and he feared theory and technique. Foucault thought the opposite: it is precisely the idea of a culture that embraces everything, that explains everything, that poses the problems of the relationship between man and world, of reality, of artistic creation, of happiness, that is mistaken, and “our task,” he added, “is to free ourselves definitively from humanism.”

Joel Stein in Time speaks not only of systematic provocateurs, but also of the deeply uncivil climate that reigns in the free and open world of the internet, where everything is permitted, certainly, even too much. But to understand the reason for so much aggression, we need to rethink our idea of humanism, and the rhetoric of humanism.

It is precisely the continued pursuit of a non-systematic, anti-scientific, even abstract culture that creates problems. It is precisely the ideas furthest from scientific and technical theorization that generate this flood of aggression. Foucault would have added that it is a problem of language. The language of humanism, in its appeal to the “human heart,” generates radicalization and monsters. The language of science and technology, by contrast, systematizes problems and makes them comprehensible and rational.

And yet everyone thinks the opposite. Indeed, Web 2.0 has been hailed as a sort of new humanism, and the idea that all of us can speak, express, and understand through what Foucault called the most “insipid psychology” and the “degradation of collective taste” is an obsolete and dreadful thing. Everything else follows from it: an inadequate language, the feeling that everything is always and in any case possible, and that hatred is justifiable. Judging without theory and without competence, guided only by the promptings of the heart, one even arrives at the right to hate, because hatred too becomes a tragic form of expression, a way of judging, a radicalism deemed necessary whenever it is held to be necessary.

For Foucault, rethinking humanism and moving beyond it was a provocative thesis, one that overturned many commonplaces about science and thought, commonplaces we still inhabit today; but it is the product of a lucid way of thinking that grows more necessary all the time.

© Corriere della Sera – All rights reserved

from Roberto Cotroneo http://ift.tt/2d7oqnz
via IFTTT

It’s more than just “teach kids to code”

I’m skeptical about “teach the kids to code!” as a panacea for all of society’s ills. Yet today, I’m at the White House to participate in a summit on Computer Science for All. Why would a skeptic still think it’s important to make computer science part of everyone’s education?


It’s almost impossible to overstate the breadth of cultural enthusiasm for the idea of teaching kids about computer science and computing. No matter where they sit on the political spectrum, leaders will proudly tout America’s high tech companies as the future of innovation, opportunity, and employment. Tech has become something of a secular religion in America, and as a result there’s been a rush toward enthusiastically advocating for technology education, without as much substantive and nuanced critique as the idea deserves.

The Myth of Perfect Tech Jobs

As someone who’s been making software and Internet technologies for 20 years, I’m skeptical about the enthusiasm that so many in the policy-making world have for saying, “let’s teach kids to code!” To start with the obvious elephant in the room, many of the people advocating for these programs aren’t particularly knowledgeable about technology, or the economics of today’s tech startups, in the first place. (Most people making policy haven’t yet realized that there is no “technology industry”.) And most of the technologists advocating for these programs aren’t particularly literate in how today’s educational systems work, or what constraints they face.

But my skepticism starts at a lot more fundamental level than the literacy gap between policy, tech and education. Even though I do know how to code and I do love technology, I am intimately aware of the weaknesses of many of the signature companies that define tech culture, and those are the biggest concerns we need to address.


Many tech companies are still terrible at inclusion in their hiring, a weakness which is even more unacceptable given the diversity of the younger generations we’re educating today. Many of the biggest, most prominent companies in Silicon Valley—including giants like Apple and Google—have illegally colluded against their employees to depress wages, so even employees who do get past the exclusionary hiring processes won’t necessarily end up in an environment where they’ll be paid fairly or have equal opportunity to advance. If the effort to educate many more programmers succeeds, simple math tells us that a massive increase in the number of people qualified to work on technology would only drive down today’s high wages and outrageously generous benefits. (Say goodbye to the free massages!)

And at a more philosophical level, a proper public education, paid for by taxpayers, shouldn’t be oriented toward simply providing workers for a group of some of the wealthiest, most powerful companies to have ever existed.

That’s a pretty damning case against teaching kids to code, right? So why would somebody still favor the massive investment and cultural shift required to pull it off? Well, it’s the oldest excuse in the political realm, but we have to think about the children.

Going beyond CS

There’s a much more powerful vision of “computer science for all” that can address all of the concerns raised by the current state of technology and tech companies. Technology literacy, and a strong basis in computer science, can be a powerful way to empower the most marginalized, most needy people in society.

We simply have to commit to some broad principles about how we teach CS:

  • Teaching computational thinking: Aside from simply teaching how programming works, we need to ensure that young people can understand the way that human concerns are translated into problems that computers can help solve. Like media literacy or general critical thinking skills, we should provide this information as a necessary part of teaching students to understand the systems that run the world around them. It’s essential that concerns like ethics and systemic biases be incorporated into any education about technology systems.
  • Applied CS over theory: A lot of yesterday’s computer science programs emphasized abstract concepts that could often be hard to translate into practical impact. Given that more students have access to technology in their everyday lives than ever before, recontextualizing CS education to connect directly to the tools and devices they already use can ensure that what we’re teaching is relevant. By analogy, we’re going to need a lot more electricians than electrical engineers, even if we know that the two related disciplines are both important and valuable.
  • Jobs in every industry, not tech startups: While we shouldn’t add to curriculum simply to satisfy the demands of industry, it’s reasonable to want to make sure education can translate into real-world jobs. The vast majority of technology jobs, both today and in the future, are outside of the signature startups and tech titans of Silicon Valley, in technical roles in companies that are otherwise not seen as being primarily in “tech”. These jobs may not have the high profile of Google or Facebook, but companies with a longer track record are likely to be stable, more geographically distributed, and aligned with the career and life goals of a broader swath of the population. We can de-emphasize the high-risk startup style of tech employment in favor of a much more accommodating style that could be described as blue-collar coding.
  • It’s not about making more programmers: While a lot of young people who learn about computer science may choose to go into programming or engineering or related disciplines, we should not design curricula with the goal of turning everyone into a coder. Every industry, every creative discipline, every line of work from farming to fashion, engineering to English, management to marketing, can be improved by including insights provided by being deeply technologically literate. It’s possible to teach computer science in a way where it amplifies the interests and ambitions that young people have in any discipline, and unlocks their full potential in whatever field they find meaningful.

Being literate in technology and computer science has opened up an unimaginable set of lifelong opportunities for me. From meeting friends, to having a fulfilling career, to getting to speak at the White House again today, I’ve been incredibly fortunate. And I want as many people as possible to enjoy the same potential for new opportunities and a meaningful, fulfilling career.

As we commit to broadly teaching technology, we must do a better job of addressing all of the personal, social, cultural, and civic concerns that arise with technology’s transformation of our society. Teaching CS as simply a way of filling a pipeline of employees for giant high-tech companies is not enough. Indeed, if that’s all we succeed in doing, we’ll have failed. But if we can show a whole generation of young people that technology and computer science can be one of the tools they use to pursue their passions, and amplify their impact on the world, we’ll have made a worthy addition to the canon of material that students use as a basis for their life’s work. It’ll take years of concerted, continuous effort. So let’s get started.

from Anil Dash http://ift.tt/2cNeqiN
via IFTTT