
Recently, there’s been considerable interest in how academics can evaluate the impact of social media outputs. A recent article, titled “Who Gives A Tweet? Evaluating Microblog Content Value” [PDF] and authored by Paul André, Michael S. Bernstein and Kurt Luther, shares the results of a study which involved the creation of an online tool, “Who Gives A Tweet?” (WGAT). This online tool encouraged and enabled users to voluntarily rate the “value” of tweets. Using a corpus of approximately 43,000 ratings, the authors asked: “What content do Twitter users value? For example, do users value personal updates while disliking opinions?” and “Why are some tweets valued more than others?”

Though the tool was developed by academic researchers within higher education institutions (WGAT is hosted at MIT), the study involved general users (not only academics) and therefore discusses all types of tweets, not only what could be called “academic tweets” (as suggested in the title of this post). I am interested in making this distinction because, in order to discuss how to evaluate (or “measure”) the impact of academic content shared on social media (or academic activity on social media), we would need to focus specifically on how academics make sense of social media content. My gut feeling is that as academics we evaluate the quality of academic social media content through the same set of basic interpretive skills we employ to evaluate anything else we read.

Not everyone agrees on what an academic tweet is, but I would like to suggest it means something more specific than a tweet posted by someone who happens to work in an educational institution. Academics have all types of conversations online and offline; even those conversations that could be labelled “academic” take place in different contexts; they have different themes and approaches, nuances, agendas, etc. These different types of conversation fulfil different functions, and interpreters evaluate them accordingly. Though it could be said that some of these functions are essentially social, they nonetheless act as catalysts of academic work (phatic communication, if you will). It would therefore be difficult to agree on what could or should be considered of strictly academic value; we might instead say that words, conversations or data should be evaluated in specific contexts, and therefore qualitatively. WGAT focuses on individual tweets as decontextualised units, asks users to categorise them according to pre-established value judgements, and makes generalisations from these individual ratings. I find this troublesome for the academic evaluation of social media content, and I will try to explain why.

Twitter is a public and asynchronous medium. Because it is non-linear and distributed, pieces of information are received by very different people at very different times and places. Twitter de facto decontextualises information in the shape of the individual tweet, and though the individual tweet is Twitter’s most basic technical unit, meaning and interpretation, and indeed Twitter’s full capabilities, are only actualised when connections are made between these single units (a tweet is always part of a bigger conversation that a specific user may or may not be aware of). At the same time, Twitter enables re-contextualisation by encouraging further research so users can get the complete picture. This means that, taken on their own, tweets as isolated units offer a particular “value” that often (if not always) requires recontextualisation in order to be fully appreciated. I consider this distinction important, not least because academics, when in “strictly professional mode”, often appreciate very different types of information from (or appreciate differently than) what the general public would generally prefer, or what the public versions of their academic selves would publicly admit to preferring.

I am unavoidably attracted to tools developed using the Twitter API, and I am convinced that very interesting conclusions can be drawn from their development, their use and the data they provide. Nevertheless, I have serious doubts that we need, as the authors write, “technological intervention: design implications to make the most of what is valued, or reduce or repurpose what is not”, especially when the judgement criteria are so inherently subjective and context-specific. When is critique “whining”? When is geolocation data useful, and when is it “boring”? Maybe millions of users have already read that link, but what about the other potential timelines, with users who are not connected all the time?

The concern I have with these “approaches [with] the potential to address issues of value and audience reaction” (here we can include Topsy, the service used by the altmetrics tool) is that they too closely resemble what is done in market sentiment research. When I worked in the market sentiment research sector, I discovered that the methods used to classify audience reaction avoided the complex qualitative analysis one expects from university research. In my experience, the categories used to classify reactions to content did not always allow for nuance, and aimed at pragmatic, market-driven graphic presentation rather than the reasoned argumentation traditionally used in academic field studies or literature reviews.

Sentiment research might be well suited to surveying public opinion about, say, a new soda, but if we are interested in discussing academic or scholarly uses of social media, a similar conceptual framework, based on surveying and generalising public opinion, seems to me constraining and even counter-productive. A given user (or millions of them, if you will) may think a certain tweet is boring or devoid of value, but that same tweet may be of great interest to a different type of user: one who cares precisely about what others find uninteresting.

Social media services like Twitter and Facebook tend to have a way of self-regulating. Eventually, most semi-capable users can become fluent in new social networking platforms and can even learn good practices by mere trial and error. Without a doubt, there is at the same time a real need to discuss and establish guidelines and policies for institutional social media good practice, and for the inclusion of social media literacy education in school curricula.

And yet I am troubled by the suggestion that an automated, crowdsourced rating system is necessary to perform a basic interpretive task. What kind of collections would academic libraries have if only the books the crowd deemed most popular were considered of any value? I love technology, but I also want to believe academics are still perfectly prepared to decide, without the need for “technological intervention”, who gives a tweet about what and, most importantly, why.

 
