
In June 2016, a Facebook executive declared that the future of publishing on the web was in video.

They were seeing it in their data. Nicola Mendelsohn, vice president for Facebook’s European operations, explained the reasoning this way:

“The best way to tell stories, in this world where so much information is coming at us, actually is video. It commands so much more information in a much quicker period. So actually, the trend helps us to digest more of the information, in a quicker way.”

If you read that in isolation, it doesn’t make any sense, since the opposite is true: we can take in much more information, much more quickly, through text than through video. But never mind; Facebook’s data was showing a decline in engagement with text and an increase in engagement with video.

“Pivot to video” was launched. As Laura Hazard Owen documents in an article at NiemanLab about Facebook’s declaration of the impending pre-eminence of video and the subsequent response, numerous online media companies (Vice, MTV, Bleacher Report, Mashable, etc.) reacted in the wake of the declaration, many of them immediately shifting away from text, firing writers, hiring video producers, and investing in the brave new world of video.

Long story short, it turns out that Facebook’s data was farkakte, and people weren’t really investing their precious attention in video. It was closer to the opposite. (It’s alleged that Facebook may have inflated the “average duration of video viewed” metric by over 700 percent.) Whether Facebook was aware of the flawed data when it communicated this “knowledge” to the public is the subject of a lawsuit, which is what produced these revelations.
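One reported explanation for the inflated metric is that “average duration of video viewed” divided total watch time only by views lasting three seconds or more, rather than by all views. The toy sketch below (my own illustration with invented numbers, not Facebook’s actual calculation) shows how that definition alone can double the reported average:

```python
# Toy illustration (invented data): how excluding short views from
# the denominator inflates an "average duration viewed" metric.

# Watch times in seconds for ten views; most people bail quickly.
watch_times = [1, 1, 2, 2, 2, 3, 5, 10, 30, 60]

total_seconds = sum(watch_times)

# Straightforward metric: total watch time divided by ALL views.
avg_all_views = total_seconds / len(watch_times)

# Inflated metric: divide only by views lasting 3+ seconds.
long_views = [t for t in watch_times if t >= 3]
avg_long_views_only = total_seconds / len(long_views)

print(f"Average over all views:      {avg_all_views:.1f}s")        # 11.6s
print(f"Average over 3s+ views only: {avg_long_views_only:.1f}s")  # 23.2s
print(f"Inflation: {avg_long_views_only / avg_all_views - 1:.0%}") # 100%
```

The discrepancy grows with the share of short views; with enough drive-by viewers, far larger inflation figures become possible.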

We’ll see how the lawsuit turns out, but what interested me is the use and misuse of data, and what happens when a corporation like Facebook controls so much data, and commands so much of the larger ecosystem, that others feel as though they have no choice but to react to what Facebook is communicating about the world.

Many probably heard the news about Amazon “scrapping” an AI-driven hiring tool because it systemically discriminated against women. The goal, as reported by Reuters, was to develop “an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

Having been “trained” by observing patterns in successful resumes over a 10-year span, a period dominated by male applicants, the algorithm “learned” to discriminate against female candidates, literally penalizing resumes that included the word “women’s,” as in “women’s chess club.”

Even altering the algorithm not to exact this penalty couldn’t guarantee that some less obvious bias wasn’t being reinforced by the process.[1]
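The mechanism is easy to reproduce in miniature. Below is a minimal sketch, with invented resumes and a generic logistic-regression classifier (my own illustration, not Amazon’s system): when the historical “hired” labels skew male, a token like “women’s” correlates with rejection, and the model dutifully assigns it a negative weight.

```python
# Toy sketch of how a screener trained on biased hiring history
# "learns" to penalize gendered words. All data is invented; this
# illustrates the failure mode, not Amazon's actual model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical resumes and outcomes (1 = hired). Because the past
# pool skewed male, "women's" co-occurs only with rejections.
resumes = [
    "software engineer men's rugby club python",    # hired
    "software engineer chess club java",            # hired
    "data analyst men's soccer captain sql",        # hired
    "software engineer women's chess club python",  # rejected
    "data analyst women's coding society sql",      # rejected
    "engineer women's rugby club java",             # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Keep apostrophes so "women's" survives tokenization intact.
vectorizer = CountVectorizer(token_pattern=r"[a-z']+")
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women's" gets a negative
# coefficient, i.e., its presence pushes toward rejection.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word, weight in sorted(weights.items(), key=lambda kv: kv[1])[:3]:
    print(f"{word:12s} {weight:+.2f}")
```

No one wrote “penalize women” anywhere in that code; the bias arrives entirely through the training data, which is why removing one offending token offers no guarantee that subtler proxies aren’t doing the same work.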

Both the Amazon and Facebook stories point toward a kind of magical faith in AI and data. Amazon’s project was doomed from the start, and a more thoughtful consideration of the issues concerning how we hire people would’ve led to this realization.

With Facebook, as Laura Hazard Owen points out, it turns out that we had contemporaneous, publicly available data that put the lie to what Facebook was peddling. A Pew survey found that Americans under 50 preferred text to video, primarily for its efficiency and ease of accessing information.

People were skeptical about pivot-to-video, but they allowed the power and influence of Facebook to overwhelm their skepticism. That’s a problem, not just in terms of the underlying epistemology, but in what it means when companies like Facebook control the data.

In a world where every mouse click and keystroke is tracked and turned into numbers to be massaged by algorithms, we’re going to have to learn to be considerably more skeptical of data and considerably more vigilant when it comes to requiring transparency from those who are profiting from that data.

This is particularly true in education, where we are raising a generation of students whose every move inside and outside the classroom is being tracked. Personalized learning software, which underpins one of Mark Zuckerberg’s forays into education, is designed both to draw the boundaries around what is meaningful to learn and to determine whether or not students are “learning.”

Emote is an app that goes beyond academic performance to the explicit monitoring of student emotions and moods, as determined by school personnel logging their observations of students. The hottest part of the ed-tech marketplace is essentially various applications of spyware.[2]

Colleges provide “guided pathways” and algorithmic advising based on previous outcomes of other students, the same methodology as Amazon’s abortive resume screener.

How certain are we that there isn’t a pivot-to-video-like problem lurking in some of these ed-tech initiatives? How would we even know to go looking?

Facebook could peddle a bogus pivot-to-video pitch because it exerts total control over its data and uses that control to dominate information distribution on the internet.

What are the potential consequences of establishing this kind of ecosystem in education? I don’t remember a robust discussion about whether or not it’s a good idea to track students in every dimension imaginable, and yet we’re well on the way to a world where it’s done as a matter of course, allegedly for students’ own well-being.

Individuals will never be able to challenge the tech behemoths when it comes to generating data. The idea that we could fight it out, data v. data, with a Facebook or Google or Amazon is fanciful.

But what we all do have is judgment, judgment which can be employed when the data just doesn’t seem to add up.

Our first exercise of judgment may be to stop allowing so much of ourselves to be turned over to algorithms in the first place.

 

[1] Fortunately, according to Amazon, the tool was never used to evaluate candidates.

[2] Recently on Twitter, Chris Gilliard, a scholar of digital technology and privacy, listed a collection of these current speculative forays into ed-tech.
