
Twitter Rumoring – Published Research

 

Overview


Our UW directed research group, led by Dr. Kate Starbird, studied how a particular rumor spread during the 2015 Paris attacks. After a process of interviews, coding, and analysis, we published our findings at CSCW '17.

My Contribution


Tweet Coding, Semi-Structured Interviews, Grounded Theory, Transcription, Affinity Diagramming, Collaborative Coding, Theoretical Analysis and Writing, Editing for Journal Publication.


What's Interesting?

 

Rumor Verification & Blame ("Locus of Responsibility")

In our analysis, I took particular interest in the idea of blame, or locus of responsibility. When a participant learned that they had tweeted a false rumor, did they experience a sense of guilt? Did they take ownership of their mistake, or did they try to shift blame in some way?

Grounded in our interviews, I drafted a conceptual framework for "blame". Through team collaboration, we broadened this idea of "locus of responsibility" to include six strategies of responsibility ownership or deflection. I wrote a detailed outline of this section of our article, describing our theoretical constructs, proposing verbiage, offering supporting quotes, and more. Kate Starbird built the final published section from my outline, noting, "Many thanks to Paul, who had a fantastic outline that was very easy to build off of!" In the end, I provided editorial tweaks prior to final CSCW '17 publication.

 

Pre-Interview Preparation

Tweet Encoding

At the first word of the Paris attacks, the team activated monitoring scripts set to capture tweets containing keywords such as "shooting" and "Paris attacks". On close review after the event, we identified a widely tweeted rumor claiming an attack at the Les Halles shopping center; in fact, that shopping center was never a shooting target.
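
Our actual monitoring scripts aren't reproduced here, but the following is a minimal sketch of the keyword-capture idea, assuming tweets arrive as JSON-lines records with a "text" field (the keyword list is illustrative only):

    import json

    # Illustrative keywords only; the team's real tracking list was broader.
    KEYWORDS = ["shooting", "paris attacks", "les halles"]

    def matches_keywords(text):
        """Return True if the tweet text mentions any monitored keyword."""
        lowered = text.lower()
        return any(kw in lowered for kw in KEYWORDS)

    def capture(stream_lines, out_path):
        """Append keyword-matching tweets from a JSON-lines stream to a file."""
        with open(out_path, "a", encoding="utf-8") as out:
            for line in stream_lines:
                tweet = json.loads(line)
                if matches_keywords(tweet.get("text", "")):
                    out.write(json.dumps(tweet) + "\n")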

The subset of "Les Halles" tweets was extracted, but required human review to determine and encode whether each tweet affirmed or denied the rumor. Some tweets were discarded as neutral, uncodable, or unrelated. Team members calibrated their evaluation criteria, performed practice encodings, and then two members evaluated each set of tweets. Disputes were adjudicated by a third team member.
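
The review itself was manual; the sketch below only illustrates the agreement-and-adjudication logic, assuming each coder's labels are kept in a dictionary keyed by tweet ID:

    def merge_codes(coder_a, coder_b):
        """Merge two coders' labels, keyed by tweet ID.

        Agreements become final codes; disagreements are queued for
        adjudication by a third team member.
        """
        final, disputes = {}, []
        for tweet_id, label_a in coder_a.items():
            label_b = coder_b.get(tweet_id)
            if label_a == label_b:
                final[tweet_id] = label_a
            else:
                disputes.append((tweet_id, label_a, label_b))
        return final, disputes

    # Example: "t1" agrees and is accepted; "t2" goes to the adjudicator.
    final, disputes = merge_codes(
        {"t1": "affirm", "t2": "deny"},
        {"t1": "affirm", "t2": "neutral"},
    )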

With the tweets encoded, further scripting determined whether the same user tweeted multiple times and whether they deleted any tweets, letting us establish distinct and interesting patterns such as the following (a sketch of the grouping logic appears after this list):

  • Affirm Only – single or multiple affirmations ("a shooting in Les Halles shopping center in Paris")

  • Deny Only – single or multiple denials ("Les Halles had no shootings")

  • Affirm then Deny or Delete – the user self-corrected ("I posted earlier that there was a shooting, but I was mistaken.")

  • Multiple Affirm, Deny, or Delete – the user spread the rumor, then corrected, then affirmed or denied again.

Participant Selection

Participants were selected from each of the tweet pattern groups noted above, to provide a spectrum of behavior types and possible personas. We reached out to candidates through friendly tweets, connections, and personal messages; those who responded were offered an opportunity to be interviewed, with a small gratuity.


Interviews

Interview Guide

I helped set up our interview guide for easy note taking by the facilitator and the other supporting interviewers. This enabled easy cataloging and consolidation of notes after each participant's interview, and incorporation of more formal transcription updates with time stamps, key quotes, and more.

Semi-Structured Interviews

The team rotated roles of Lead and Secondary Interviewer, with all of us taking notes. The semi-structured guide provided a framework in which we could learn about the participant and their self-perception:

  • How do you typically use Twitter?

  • Who are your followers?

We then asked a series of questions to delve into their behavior during the crisis event itself:

  • Where were you when you learned about the Paris attacks?

  • Do you have an emotional connection with the event (or Les Halles)?

  • How closely were you following the event?

  • How did you use social media during that time to seek and/or share information?

We then transitioned into questions about their specific tweets. Prior to the meeting, we had assembled each participant's tweets and sent them over so the participant could reflect on each tweet in sequence. A different set of questions was available depending on whether each tweet was an Affirm, Deny, or Delete:

  • Can you talk about this tweet?

  • At the time, were you concerned about whether it was true or not?

  • Did you do anything to correct this information after you found out it was false?

Throughout each interview, we used grounded theory methods, in which insights the participant provided might lead us to explore new, previously unexpected areas. For example, we started identifying a "journalist" behavior during one of the interviews and were able to direct our questions more deeply into that area.

Transcription

Recordings of each interview were reviewed and transcribed. We chose to do exact, time-stamped transcriptions rather than rely solely on the approximate notes taken during the calls. This gave us greater accuracy, made it easier to incorporate statements into our affinity diagramming sessions, and let us leverage quotes later in our process.

 

Affinity Diagramming

Transcript Cards

With full transcripts already in hand, I devised an idea to use scripts to generate printed cards for each unique quote or thought cluster. Team members could work through a copy of each transcript and put a hard return after each unique item they wanted on a card. Running the script generated a grid that could be cut into sticky-note-sized cards, readable at a distance, so they could be taped or laid out in our affinity diagramming session. Team members could also manually create higher-level concept or "theme" cards in this process. (A sketch of the card-generation idea follows.)
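
My original script isn't shown here; the following is a sketch of the same idea under assumed card dimensions and styling, turning a hard-return-delimited transcript into a printable HTML grid:

    import html

    CARD = '<div class="card">{}</div>'
    PAGE = """<html><head><style>
    .card {{ width: 3in; height: 3in; float: left; border: 1px dashed #999;
             padding: 0.2in; font-size: 20pt; overflow: hidden; }}
    </style></head><body>{}</body></html>"""

    def transcript_to_cards(transcript_path, out_path):
        """Turn each non-empty transcript line into one printable card."""
        with open(transcript_path, encoding="utf-8") as f:
            items = [line.strip() for line in f if line.strip()]
        cards = "\n".join(CARD.format(html.escape(item)) for item in items)
        with open(out_path, "w", encoding="utf-8") as out:
            out.write(PAGE.format(cards))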

Large Scale Affinity Diagramming

With such an information-rich topic, we dedicated 3 days and the walls and tables of two rooms to this exercise. We discovered numerous interesting clusters, with enough insights and areas of theoretical exploration for multiple journal articles. As noted earlier, I took special interest in the locus of responsibility, but I also investigated modes of trusting or verifying information prior to tweeting. Other areas surfaced, such as modes of self-correction, the impact of deleting tweets versus posting a correcting post, and differences between types of users such as journalists versus "helpers" who want to amplify a distress signal.

 

Collaborative Coding

After completing our affinity diagramming sessions, we triangulated our findings using a second approach. As a team, we aligned on codes for the ideas with the greatest merit for our journal article. For instance, I previously described how we identified "Locus of Responsibility" as a topic of interest; within it, we had identified six unique variations.


By loading our source transcripts and these codes into saturateapp.com, we could independently read through each transcript and tag sentences or ideas of interest. Over the course of a week, with each of us working asynchronously, patterns emerged within the documents, and we were able to generate reports showing the clusters of quotes and comments for each tag and sub-tag.
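
saturateapp.com generated these reports for us; just to illustrate the underlying idea, grouping exported excerpts by tag might look like the sketch below (the tags and rows are invented placeholders, not study data):

    from collections import defaultdict

    def cluster_by_tag(coded_excerpts):
        """Group coded excerpts so each tag lists its supporting quotes.

        coded_excerpts: a list of (tag, participant, quote) tuples,
        e.g. exported from the coding tool.
        """
        clusters = defaultdict(list)
        for tag, participant, quote in coded_excerpts:
            clusters[tag].append((participant, quote))
        return clusters

    # Placeholder rows for illustration only.
    report = cluster_by_tag([
        ("locus-of-responsibility/deflect", "P3", "placeholder quote"),
        ("locus-of-responsibility/own", "P7", "placeholder quote"),
    ])
    for tag, quotes in report.items():
        print(tag, "-", len(quotes), "excerpt(s)")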

 

Publication

For each topic within our journal article, individuals explored clusters of ideas and wrote up theories supported by quotes, generally following the process I described for my "locus of responsibility" topic. Kate Starbird and our two lead PhD candidates incorporated our suggested content and their own, writing the first drafts of the document. We iteratively reviewed, commented on, and improved the document until it was publication-worthy. It was accepted and published in the Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17).

>> A Closer Look at the Self-Correcting Crowd


Reflections


Interviews: 

  • The semi-structured format was effective, keeping us on track but with some pre-planned branching logic.

  • It was important to stay flexible, so that we might be able to explore new information and ideas as they arose. Any team member could gently interject follow-up or add-on questions to help explore.

  • Pre-call alignment was very important for better execution of our interviews. Selecting team roles, reviewing participant tweets, and exploring more of the participant's Twitter history really helped frame things more solidly.

Affinity Diagramming:

  • The transcript cards really made affinity diagramming easier and reduced hours of hand-writing labor.

  • With about 15 one-hour interviews transcribed, there was a lot of repetitive material at that level of granularity. Some consolidation would have strengthened this exercise.

  • The "Theme" cards turned out to be vital organizationally. The themes we anticipated were not always used, though, and hand-writing new themes as clusters revealed them proved more effective.

Collaborative Coding:

  • The saturateapp.com tool was very effective for asynchronous group coding.

  • This activity felt slightly redundant after the level of detail we had put into our affinity diagramming.

  • I could see a hybrid approach, where we put less effort into one or the other, or perhaps attempt just the coding in the future.

  • I definitely respect triangulation, though, so for CSCW-level journal articles, I believe the extra effort was not wasted.

Files and Deliverables:


Copyright © Paul Townsend