The coder input was collected via SogoSurvey.com in a 2-page survey with 160 questions. Each coder was given a private link to the survey, which included all the coding definitions and information specified in the protocol definitions section.
The judges were greeted with this introduction at the top of the survey:
I'm asking you to rate the headlines, all taken from CBC within 1-2 months of the arrest, based on their support for the Western position or the Chinese position.
More details on the rating are explained below. You will be asked to rate the headlines as neutral, pro-Chinese, or pro-Western.
Meng Wanzhou, CFO of Huawei, was arrested by Canada to fulfill its extradition treaty obligations to the USA. A subsidiary of Huawei had secretly been trading with Iran, violating US sanctions.
Shortly after the arrest, Trump openly stated she could be used as a bargaining chip.
The following backstory was provided in the instructions because several headlines discussed the firing of Ambassador McCallum and we anticipated that this particular twist of the story would not be well known to our coders.
A key piece of backstory is that during the early days of the arrest, Canada's ambassador to China made several statements in support of Meng Wanzhou & the Chinese Position and was promptly fired.
Ambassador McCallum, "Canada's man in Beijing", was criticized & finally fired for defending Meng Wanzhou.
This introduction was followed by the specific examples & information contained in the Protocol Definitions section.
After reading the introduction & coding guidelines, the judges took the survey, a 2-page, 160-question multiple-choice document, and were given the option of selecting Pro-Western, Pro-Chinese, or Neutral for each headline.
It was mandatory to select a code for every headline.
Judges were selected to provide an equal balance. 5 Canadians & 5 Chinese were self-selected from among my politically active acquaintances on the Chinese social media platform WeChat.
Because they were selected from a Chinese social media platform and were either living in China or had previously resided there, the Canadians chosen can be considered reasonably familiar with China and the Chinese perspective. One of the Canadian judges had previously participated in my last failed attempt at coding the headlines.
The Chinese chosen were reasonably politically active on WeChat. They were self-selected from Chinese groups that typically advocate a strong pro-China viewpoint. One of the Chinese judges had previously participated in my last failed attempt at coding the headlines.
The judges were given a page-long set of instructions, with examples for each category. This has been reproduced in the protocol definitions & protocol procedures sections.
This was the second attempt at coding the 160 headlines. Originally, I had chosen very complicated coding criteria that produced completely unusable results. Three of the judges used for this coding session also participated in the first coding session, on the exact same set of headlines. The first coding session was on the same theme but with far too many categories.
We started off the coding definitions with Neutral, both to avoid bias and to clarify that some neutral-sounding headlines could easily be interpreted as biased. This was done to combat selective reporting: a typical form of media bias where certain facts, figures and stories are presented in a seemingly objective way but can easily end up biasing readers if only one side of the story is selected for coverage.
Here's an example of a true neutral headline:
'Meng Wanzhou arrested'.
Beware of leading questions that present a position without taking ownership of it. Don't hesitate to mark a question like "How Dangerous is Huawei?" as pro-Western if you feel it evokes negative feelings towards the Chinese position.
Simple factual headlines: Neutral headlines must simply state the facts, BUT please don't interpret every headline that simply states the facts as a neutral headline. Here's an example of a headline that is simply factual but should be rated as pro-Western:
Huawei accused of spying by American authorities.
Although it could simply be taken as a factual commentary on American political remarks, in fact, the headline helps amplify those remarks and can leave a lasting impression on the reader.
The Chinese embassy in Canada said, "At the request of the U.S. side, the Canadian side arrested a Chinese citizen not violating any American or Canadian law".
Pro-Chinese headlines could present a positive image of Huawei, Meng Wanzhou or Chinese entities, or a negative image of Canada & the Western entities who took action against Meng Wanzhou. These headlines could imply that Meng Wanzhou should be freed, that she's innocent, or that the arrest & extradition were politically motivated. They could also place guilt on Canada for the arrest.
Canada's Justice Minister said, "She is sought for extradition by the United States."
Pro-Western headlines should support the arrest, accuse Huawei of spying, or imply guilt for trading with Iran. They could evoke Yellow Peril imagery of a rising China that the West should fear. They could also imply that Canada is innocent and doesn't deserve responsibility for the action against China: that Canada's arrest was unavoidable, apolitical & done entirely out of a willingness to follow international law.
Here's what the survey looks like on SogoSurvey.com: [survey screenshot]
The survey has now been published, which means it's open to the public. This was primarily done to enable readers of this thesis to check the methodology for themselves. The survey can now be accessed and taken at
The main hypothesis of the thesis states that the headline coverage of CBC during the months after the Meng Wanzhou (Huawei CFO) arrest will be slanted Pro-Western. The coding analysis demonstrated this quite consistently. Two coders were removed for being heavily uncorrelated with the rest & not following coding guidelines. More details & justification are given in Section 6: Criticisms.
There were 4.08 Pro-Western headlines for every 1 Pro-Chinese headline: 26.8% of headlines were coded Pro-Western vs 6.56% Pro-Chinese (26.8 / 6.56 ≈ 4.08). These results support the thesis, although further examination is warranted given the somewhat disappointing headline-by-headline coding results (Section 6: Criticisms).
The 160 headlines were a complete sample of Huawei & Meng Wanzhou coverage during the time period, and as such don't need to be extrapolated to a larger population.
The results were tested via a t-test against the null hypothesis that the population mean, i.e. the net difference between pro-Western & pro-Chinese headline codes, would be 0.
The null hypothesis was rejected, as the main hypothesis predicted: CBC's Pro-Western bias has been demonstrated and shown to be statistically significant.
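As an illustration of this test, here is a minimal sketch in Python, assuming each headline's code is scored +1 (Pro-Western), -1 (Pro-Chinese), or 0 (Neutral); the scores below are randomly generated placeholders, not the real coded data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-headline net scores for the 160 headlines, drawn to
# roughly match the reported 26.8% / 66.64% / 6.56% split.
scores = rng.choice([1, 0, -1], size=160, p=[0.268, 0.6664, 0.0656])

# H0: the population mean (the net difference between Pro-Western and
# Pro-Chinese codes) is 0, i.e. no slant either way.
t_stat, p_value = stats.ttest_1samp(scores, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```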
The core of the expert analysis was highlighting all 8 headlines that included the word "expert".
With 8 coders, the totals for the 8 headlines where CBC cites an anonymous expert were tallied and compared against the 8-coder base data set: 26.80% pro-Western, 66.64% neutral and 6.56% pro-China. A t-test shows that even against the pro-Western-biased main data set, the experts' biased set is also statistically significant.
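A hedged sketch of that comparison, reusing the same +1/0/-1 scoring and testing the expert-headline codes against the baseline's net mean; the 64 expert codes (8 coders x 8 headlines) below are invented placeholders:

```python
import numpy as np
from scipy import stats

# Net mean of the full 160-headline set:
# 26.80% Pro-Western minus 6.56% Pro-Chinese.
baseline_mean = 0.268 - 0.0656

# Placeholder codes for 8 coders x 8 expert headlines (64 ratings).
expert_scores = np.array([1, 1, 1, 0, 1, 1, 0, 1] * 8)

# H0: the expert headlines are no more slanted than the baseline set.
t_stat, p_value = stats.ttest_1samp(expert_scores, popmean=baseline_mean)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```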
The experts were further analyzed by reading each article, highlighting relevant quotes and checking each expert's background to assign a personal rating to each expert. With just one coder, the result was 6 Pro-Western / Anti-China and 2 Neutral articles. (See Appendix 7.4: Expert Summary for the experts' background information & relevant quotes.)
These two results fit with our hypothesis that experts (sources) in headlines would skew heavily Pro-Western. The surprise was that they skewed much more Pro-Western than even the biased baseline dataset.
Much time was spent during this thesis trying to improve the reliability of the main dataset. The main data set of 160 headlines coded individually still only reached a Krippendorff's alpha of .298 and an average pairwise agreement of 63.28% (still a large improvement over the 10-coder sample: see Appendix 7.3).
"After data have been generated, reliability may be improved by discarding unreliable distinctions, recoding or lumping categories or dropping variables that do not meet the criterion adopted in (iii)" (Krippendorff, 2004).
This was first accomplished by removing the 2 heavily uncorrelated coders. Pairwise agreement & reliability scores also spiked significantly when breaking the data down into binary X / other distinctions.
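For reference, a minimal sketch of how average pairwise percent agreement can be computed, and how heavily uncorrelated coders can be spotted; the 3 x 5 code matrix is a toy placeholder, not the real 8 x 160 data:

```python
from itertools import combinations
import numpy as np

# Toy matrix of nominal codes: rows = coders, columns = headlines.
codes = np.array([
    ["W", "N", "N", "C", "W"],
    ["W", "N", "W", "C", "W"],
    ["N", "C", "N", "W", "N"],
])

# Average pairwise percent agreement across all coder pairs.
pairs = list(combinations(range(codes.shape[0]), 2))
agreements = [np.mean(codes[i] == codes[j]) for i, j in pairs]
print(f"average pairwise agreement = {np.mean(agreements):.2%}")

# Per-coder mean agreement with everyone else, to flag rogue coders.
for c in range(codes.shape[0]):
    others = [np.mean(codes[c] == codes[j])
              for j in range(codes.shape[0]) if j != c]
    print(f"coder {c}: {np.mean(others):.2%}")
```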
Then the data was coded numerically instead of nominally, on a scale of Pro-Chinese = 0, Neutral = 44, Pro-Western = 88. This scale was chosen to polarize the two ends and maximize reliability when analyzed as interval data.
We can see that the Krippendorff's alpha for interval & ordinal data is around .3, versus .235 for the unweighted nominal set.
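A sketch of this comparison using the third-party krippendorff Python package (pip install krippendorff), with codes recoded on the 0 / 44 / 88 scale described above; the coder-by-headline matrix is a small invented placeholder:

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Placeholder matrix: rows = coders, columns = headlines,
# values on the Pro-Chinese = 0, Neutral = 44, Pro-Western = 88 scale.
recoded = np.array([
    [88, 44, 44,  0, 88, 44],
    [88, 44, 88,  0, 88, 44],
    [44, 44, 44,  0, 88, 88],
])

# Compare alpha under nominal, ordinal and interval assumptions.
for level in ("nominal", "ordinal", "interval"):
    alpha = krippendorff.alpha(reliability_data=recoded,
                               level_of_measurement=level)
    print(f"{level:8s} alpha = {alpha:.3f}")
```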
But the real reliability breakthrough came after comparing each coder's results in total, instead of headline by headline.
Breaking down our results into a simple sum total of each category produces reliability figures well within acceptable ranges. If we take each coder as simply judging the entire 160-headline set one time, ranking it in total with 3 numbers (total pro-Chinese, total pro-Western and total Neutral), the focus is more on the overall CBC bias and less on specific headline rankings.
Using Total Numbers instead of Headline by Headline:
A Krippendorff's alpha of .808/.829 is a highly reliable result. Krippendorff suggests: "[I]t is customary to require .800. Where tentative conclusions are still acceptable, .667 is the lowest conceivable limit" (2004, p. 241).
Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Thousand Oaks, California: Sage.
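One plausible reading of the totals-based check described above, sketched with the same krippendorff package: each coder is collapsed to three counts over all 160 headlines, and those counts are treated as three interval-level "units". The counts below are invented placeholders, not the real coder totals:

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Placeholder totals: rows = coders; columns = total Pro-Western,
# total Neutral, total Pro-Chinese over the 160 headlines.
totals = np.array([
    [45, 105, 10],
    [41, 109, 10],
    [44, 104, 12],
    [42, 108, 10],
])

alpha = krippendorff.alpha(reliability_data=totals,
                           level_of_measurement="interval")
print(f"interval alpha on totals = {alpha:.3f}")
```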
The general rule of thumb for percent agreement is presented in Neuendorf: "Coefficients of .90 or greater are nearly always acceptable, .80 or greater is acceptable in most situations, and .70 may be appropriate in some exploratory studies for some indices" (Neuendorf, 2002, p. 145). For social science studies in the communication field, the goal is often .80 or 80% pairwise agreement. In a separate article, Lombard, Snyder-Duch, and Bracken suggest a higher threshold of .90 (90%) for percent agreement because of the weaknesses described below (2002, p. 596).
Pairwise agreement & kappa ratings change dramatically when we further collapse the categories to Pro-Western / other and Pro-Chinese / other.
Because every coder analyzed every single headline during the time period, the sample size is equal to the population and is fully representative.
Stronger coder reliability ratings can also be obtained by removing several more rogue coders, although doing so actually reduces the average pairwise percent agreement compared to the 8-coder sample used in this thesis.
The kappa coder reliability statistics are surprisingly low, given the large pairwise agreement across coders. As shown in the appendix, however, this still demonstrates agreement and doesn't disprove the thesis. Inter-coder reliability numbers are lower than specified in content analysis guidebooks.
When breaking down by Pro-Western / other or Pro-Chinese / other, reliability increases. The Fleiss' kappa for Pro-Western / other is .2901, which falls under "fair agreement" and out of the realm of chance. See Appendix 7.1 for more. Finally, the coders and results all skewed the same way (Pro-Western).
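A sketch of this binary collapse and the Fleiss' kappa computation, using statsmodels; the headline-by-coder matrix is a toy placeholder, not the real data:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy matrix of nominal codes: rows = headlines, columns = coders.
codes = np.array([
    ["W", "W", "N", "W"],
    ["N", "N", "N", "C"],
    ["W", "W", "W", "N"],
    ["C", "N", "N", "N"],
])

# Collapse to Pro-Western / other: 1 = Pro-Western, 0 = everything else.
binary = (codes == "W").astype(int)

# aggregate_raters turns the subject x rater matrix into per-subject
# category counts, the input format fleiss_kappa expects.
table, _ = aggregate_raters(binary)
print(f"Fleiss' kappa (Pro-Western / other) = {fleiss_kappa(table):.3f}")
```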
Unfortunately, a strict interpretation of reliability requirements means that the thesis is still open to debate and further exploration.
Some confusion arose over headlines that contained a strong statement from China condemning Canada or Western nations. Several coders rated these headlines as pro-Chinese, but my interpretation (and the one specified in the coding instructions) is that these statements induce fear and us-vs-them thinking, evoking Yellow Peril imagery. Two of the ten coders had to be removed from the data set because they were completely out of pairwise agreement with the remaining 8 coders.
Unfortunately, this thesis is somewhat tarnished by relatively low agreement ratings among the coders. This thesis didn't use any pilot or training round and struggled with lower reliability ratings as a result. Several coders reported that the 160-question survey was very long, which could definitely have induced some form of coder fatigue.
Removing rogue coders is standard practice after a pilot reliability study, not after gathering a full dataset. Here, they were removed because they were the least correlated coders and because they seemed to misunderstand the rating instructions.
The previous iteration of this thesis, a big data analysis, had already conducted an analysis of the experts cited in CBC headlines. The second, manual part of the expert analysis could be fleshed out further; it hasn't been statistically tested yet, but it should be verifiable by later researchers.