Discourse Analysis of Internet Trolls?: the whys and hows of analyzing online content*

by on 2014-06-02 in Duck- 3 Comments

Online mediums can be perceived as attracting wacky rants, unrepresentative contributors, and fringe exchanges- as a result, forums or chats are often treated as if they do not provide an effective picture/sample of political discourse. But since over 80% of Americans are online, 66% of American adults have engaged in civic or political activities with social media, and about half of those who visit discussion groups post/contribute, isn’t this an interesting- and increasingly relevant- medium for a discourse analysis? Why cut out such a vast political resource? What is different about ‘doing’ a discourse analysis of online content? How would you even start such an analysis? And why aren’t those like myself- who blog and engage in political discussions as part of my daily/weekly activity- doing more to treat online content as part of what we consider to be ‘legitimate’ political discourse? Well, I think it comes down to methodology. Here is a very brief intro to some of the opportunities and challenges of conducting a discourse analysis of online content (PS: getting students to do such an analysis is a great assignment).

1. What makes a discourse analysis of online content different from an analysis of printed text?
First (and probably somewhat obvious), online material uses multiple modes of expression, including emoticons, hyperlinks, images, video, moving images (gifs), graphic design, and color. This multimodality adds complexity (and, I argue, richness) to a discourse analysis- but the researcher must be aware of how particular signals are used (for example, ‘iconic’ or popular memes and gifs- like feminist Ryan Gosling or the Hillary Clinton texting image- begin to take on particular meanings themselves). Second, online content is unstable, instant, and edited in ways unavailable to print (even the use of striking through text signals ‘editing’/alternative meaning/irony etc- but this requires interpretation). Also, articles, conversations, and posts can be published, responded to, retweeted, then retracted or edited all within a few hours. Readers are not always aware of the editing process (nor of the editing that may take place vis-à-vis website moderators). This means it is difficult to get a fixed ‘picture’ of public discussion on a particular topic. Third, when users choose to be anonymous, user names or avatars can sometimes signal meaning. Maria Sindoni points out that one’s avatar can signal location or ideological position (e.g. DeepSouthPopulist, FeministFlyGirl, MarineVet2014).

2. How do you ‘do’ a discourse analysis of online content?
Conducting a discourse analysis of online content doesn’t need to be that different from other types of discourse analysis. One needs to establish parameters, identify themes within the content, and conduct an analysis, using representative examples to help illustrate the conclusions. However, establishing parameters can be tricky (and daunting)- one could choose popular/frequently shared posts/articles on a particular topic; analyze posts or comments within a particular time frame; or even focus on one ‘iconic’ article or post and the discussion that ensued (Foreign Policy’s feature ‘Why Do They Hate Us’ is a great possibility). Basically, as with any discourse analysis, the key is to be clear and consistent in establishing the sources. But even after establishing parameters, there can be some further challenges. For example, some posts garner hundreds of comments. How is it possible to analyze these? This leads to another methodological question:
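
To make the ‘establish parameters, identify themes’ step concrete, here is a minimal Python sketch- the comments, the time frame, and the theme keywords are all hypothetical stand-ins I made up for illustration, not a prescribed toolkit:

```python
from collections import Counter
from datetime import datetime

# Hypothetical comment records- in practice these would be scraped or exported
# from the site being studied.
comments = [
    {"posted": "2012-04-23 10:14", "text": "Cohesion is the real issue here."},
    {"posted": "2012-04-24 18:02", "text": "Physical standards matter, cohesion does not."},
    {"posted": "2012-06-01 09:30", "text": "This is off topic entirely."},
]

# Parameters: a fixed sampling window and a set of themes, each keyed to
# indicator phrases (these phrases are illustrative assumptions).
start = datetime(2012, 4, 20)
end = datetime(2012, 4, 30)
themes = {
    "standards": ["physical standards"],
    "cohesion": ["cohesion"],
}

def tally_themes(comments, start, end, themes):
    """Count how many in-window comments mention each theme at least once."""
    counts = Counter()
    for c in comments:
        when = datetime.strptime(c["posted"], "%Y-%m-%d %H:%M")
        if not (start <= when <= end):
            continue  # outside the sampling frame
        text = c["text"].lower()
        for theme, phrases in themes.items():
            if any(p in text for p in phrases):
                counts[theme] += 1
    return counts

print(tally_themes(comments, start, end, themes))
```

A tally like this only narrows the field- the interpretive work of reading the representative examples still has to be done by hand.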

3. What do you do with comments like “you are a d*ckhe@d”, which are more frequent than you might think?
There is no getting around it- doing a discourse analysis of online content involves sifting through troll droppings and verbal insults in order to get to the ‘actual’ political contributions. And applying such a filter is not necessarily clear or objective. Researchers need to make common-sense decisions about what should simply be removed from the selection (links to porn, posts that cover an entirely separate topic, etc.).
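
As a rough illustration of what such a filter might look like in practice, here is a small Python sketch- the example comments and the filtering rules (junk links, pure insults) are assumptions for demonstration, and any real filter would need to be spelled out and justified in the methods section:

```python
import re

# Illustrative comment texts, including the kind of insult quoted above.
raw_comments = [
    "you are a d*ckhe@d",
    "The cohesion argument ignores unit-level evidence.",
    "CHEAP MEDS!!! http://spam.example.com/pills",
    "Physical standards were never gender-neutral to begin with.",
]

# Common-sense exclusion rules- these are assumed heuristics, not a standard.
SPAM_PATTERNS = [
    re.compile(r"https?://\S*(spam|pills|porn)", re.IGNORECASE),  # junk links
    re.compile(r"^[A-Z\s!]{10,}$"),  # all-caps shouting with no content
]
PURE_INSULT = re.compile(r"^\s*you are a[n]?\b.{0,30}$", re.IGNORECASE)

def keep(comment):
    """Return True if a comment survives the common-sense filter."""
    if any(p.search(comment) for p in SPAM_PATTERNS):
        return False
    if PURE_INSULT.match(comment):
        return False
    return True

filtered = [c for c in raw_comments if keep(c)]
```

Note that a rule-based filter makes the researcher’s judgment calls explicit and repeatable- which is exactly why the choice of rules should be reported alongside the analysis.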

Are you convinced? Have you used online content as part of a discourse analysis?

*For all you grammar fans (I know how much you love my posts)- here is the debate on using apostrophes with ‘whys’ and ‘hows’.


  • Rhys Crilley (https://twitter.com/rhyscrilley)

    Hi Megan, this is an awesome overview of many issues that I’ve encountered with my PhD research! Which, FYI, looks at how the British Army use images and narratives on Facebook to claim legitimacy.

    In regards to the second point about actually doing online discourse analysis, I’ve found it really tricky to establish parameters. In terms of what I’m looking at, in a three month period there are over 160 posts made by the official page with over 9000 comments made by other individuals, and as such I’ve tried to use certain posts which seem to be exemplary of certain narratives. The thing I’m worried about at the minute is that by focusing on key (or iconic) content, I’m neglecting something special – I can’t quite put my finger on it – to do with the vernacularity and sheer amount of content that we’re faced with online. The big data style text analytical software programmes might be useful here but for a start they miss out a lot of the richness (the images, the emojis, the gifs etc) of online content, and I’m dubious that you can reduce the online world to language. Do you think these sorts of programmes can work with, or alongside, online discourse analysis?

    In regards to your third point, I kind of agree, but I think there’s something political about trolling, abusive posts and even spam. Maybe it’s just related to what I’m looking at but I’ve interpreted the trolling, abuse, racism and misogyny that I’ve come across as political and important. I guess at the end of the day it’s down to context and as researchers doing online research we need to immerse ourselves in the worlds we’re looking at and be reflexive in using our ‘common sense’ in determining what’s relevant and what isn’t.

  • Megan H. MacKenzie

    Thanks Rhys! Your work sounds really interesting. I think it is possible to use software alongside discourse analysis- or even to use a more formal textual analysis alongside discourse. I’ve just finished an analysis of the comments on one article (there were 590!). We started by doing more of a textual analysis, just to sort through the text- so, for example, we looked for the use of ‘physical standards’, ‘cohesion’, and some other key phrases relevant to my research. I think it helped narrow the field a bit. I agree that ‘trolling’ is political- and I think it could be useful to see which types of comments get harsh/troll reactions. I saw some piece on internet freedom on Democracy Now a while back where it was suggested that female contributors get more ‘harsh’ comments (I can’t remember who it was!). Anyway, I think as long as you clearly outline your methods, and acknowledge the limitations, you can make those choices.

  • Steven

    Hi – this is a great post, and is something I am dealing with right now. I’m actually doing a paper for one of my PhD classes that is examining the use of certain expletives in article comments, and my professor has charged me with looking at theorists who interpret written/internet data. Any recommendations?