Off the Grid
Jake Marples - Capstone Portfolio
Chapter Six: Theoretical Uselessness

Each of these universes represents a different climate for media discourse. Therefore, even though Show A and Show B might provide the most informative substance, diversification allows you to recognize big-picture trends. These trends can lead to further investigation and further knowledge. You start to notice the relationships between issues, which can help put things in context. For example, suppose you realize that you exist in Universe A. Your first question might be: why doesn't anyone support both proposals? Is this a result of conflicting core beliefs? Even in a political climate characterized by polarized partisanship, understanding the distribution and source of moderate or radical opinions helps you understand the discourse on certain issues. In other words, being 'informed' on current events involves more than just consuming 'reliable' content; it also involves understanding how the media as a whole has decided to portray those facts. And that's not to say you need to invest 17 hours a day watching 20 different news programs to stay 'informed'; it just means that awareness of many sources creates knowledge that can't be captured by consuming the few sources you deem most 'informative'. For example, suppose you listen to a perfectly unbiased, 5-hour NPR report on the 2015 Ferguson riots. Even if the coverage outlines every granular fact on a timeline while fairly providing socio-economic context, you still won't be able to fully understand the media discourse on the issue without other sources, regardless of their reputability.
NNNE Benefit 2 - Knowledge from Differences in Coverage
My initial plan for interpreting the results of my project could be represented by the following chart:
If my study happened to conclude that CNN and Last Week Tonight outranked the other sources in substance and retention, then one might reasonably decide to consume only those two sources for efficiency's sake. In this model, knowledge is transferred from an outlet to the viewer, and rational humans would pick the sources that transfer this knowledge most efficiently. My analysis, aimed at measuring the quantity of substantive content and the corresponding retention, could then provide a system for ranking this knowledge transfer, right? The problem with this train of thought is that knowledge can also be created by observing differences between coverage.
Viewers aren't sponges. Most viewers notice things. If you ever browsed the internet during the Trayvon Martin shooting, you probably saw pictures like these:
Even when sources tell an identical story—with the same facts and opinions—the delivery can differ substantially. In this case, different pictures can hold strikingly different connotations. Therefore, I believe that observing these differences can lead to better awareness of a current event, or even to new knowledge entirely. These differences lead to questions like "Why didn't CNN spend more than 45 seconds discussing Bernie Sanders in its Democratic debate coverage?" As in the discourse example above, the knowledge we derive from watching the news doesn't come entirely from the news itself. It can also come from our observation of the news, and of the differences between certain broadcasts. An amended chart would look like the following:
But even this chart is oversimplified, as it doesn't show every permutation of comparison (CNN vs. Last Week Tonight, for example). The key takeaway is that you don't consume any item in a news diet in a vacuum. Therefore, an experiment that evaluates each show in a vacuum doesn't account for network effects. It's silly to say "these shows are more informative, so watch them" when media discourse and the observation of differences haven't been taken into account. Effectively, these two framework errors—the Apples to Apples Implication and the Network News Network Effect—doomed my experiment from the start. It wasn't that I asked a good question and couldn't design a controlled experiment around it; the question itself wasn't relevant from the start.
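To put a number on how quickly those permutations pile up, here is a small sketch (the show list is only a placeholder, not my actual diet) that enumerates every cross-show comparison in a modest news diet. It's roughly the same pair-counting arithmetic behind the telephone example in the NNNE discussion: n shows yield n*(n-1)/2 pairings.

```python
from itertools import combinations

# Placeholder news diet; any list of shows works.
diet = ["CNN", "The Daily Show", "Last Week Tonight", "NBC Nightly News"]

# Every pairwise "difference in coverage" a viewer could notice.
pairs = list(combinations(diet, 2))
for a, b in pairs:
    print(f"{a} vs. {b}")

print(len(pairs))  # 6 pairings for 4 shows; 45 for a 10-show diet
```

A show-by-show benchmark scores the four shows individually and says nothing about any of those six pairings, which is exactly the knowledge the NNNE is concerned with.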
However, even if I could magically control all of the confounding variables, and thereby precisely measure the retention rates of each show, the data would still be misleading in what it implies. I've dubbed these 'framework errors' because they represent ways in which my retention strategy doesn't really accomplish its goal of tangibly comparing the value of different sources. The entire idea stemmed from my desire to determine whether or not I was wasting my time watching Jon Stewart and John Oliver instead of CNN. However, even if the results of my experiment seemed to answer that question one way or the other, the following two false assumptions render it useless:
A. The 'Apples to Apples' Implication
Generally, when you compare two things using a quantitative benchmark, it implies that the methodology is relevant to both objects of comparison in similar ways. For example, a football scout might compare the 40-yard dash times of two potential wide receivers he's looking to sign. However, he wouldn't use the 40-yard dash to compare a potential wide receiver to an offensive lineman, because those two players serve different purposes and require different skills. In the same way, The Daily Show is very different from a nightly news program. For example, The Daily Show has no obligation to report the facts of current news; instead, it tends to reflect on things that have happened with the assumption that viewers already understand the facts. Because of this, The Daily Show, in my opinion, is most enjoyable only if you have watched network news recently. So what's the point of trying to evaluate two shows apples-to-apples if watching one of them is needed to get the most out of the other? These aren't mutually exclusive choices, even though Julia Fox's study seems to imply they are.
Suppose I controlled all the necessary variables and discovered that The Daily Show features a higher concentration of substantive content (# of facts) AND a higher retention ratio over most time periods for each case study. Does this mean it's justifiable to watch The Daily Show instead of shows on CNN? As you might have predicted from the cynical nature of these rhetorical questions, the answer is no. If you compared Anderson Cooper 360 with another show that covered the exact same topics, in the exact same time period, in the exact same style, using the exact same form of delivery, then benchmarking facts and retention might be relevant. But with pretty much every conceivable variable differentiating Jon Stewart's coverage from Anderson Cooper's coverage, how can a retention benchmark be useful? A news show might boast a 90% retention ratio, but if every single story involves a cat getting stuck in a tree, a prospective voter might find it useless compared to a show with 15% retention that only covers the election. However, if the prospective viewer is a member of the Washington Animal Rescue League, they might find the feline coverage more useful. Although the differences between satirical news and mainstream news aren't this stark, the fact remains that they exist in very different genres. Even within each genre, there exists significant differentiation [Oliver vs. Stewart]. Yet the creation of a standardized benchmark to measure substance and retention implies that a higher score is better than a lower score, without exploring the possibility that each show's content is too different to even warrant a comparison.
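To make the cat-in-a-tree scenario concrete, here is a toy calculation (every number and show name below is invented purely for illustration) showing how a raw retention ratio can diverge from how useful the retained facts are to a particular viewer once stories are weighted by relevance:

```python
# Toy numbers only; nothing here is measured data.
# retention = share of presented facts the viewer later recalls
# relevance = how much this viewer cares about the topic (0 to 1)

shows = {
    "Feline Rescue Nightly": {"facts": 40, "retention": 0.90, "relevance": 0.05},
    "Election Desk":         {"facts": 40, "retention": 0.15, "relevance": 1.00},
}

for name, s in shows.items():
    recalled = s["facts"] * s["retention"]   # facts the viewer retains
    useful = recalled * s["relevance"]       # retained facts this viewer values
    print(f"{name}: recalls {recalled:.0f} facts, about {useful:.1f} of them useful")

# Feline Rescue Nightly: recalls 36 facts, about 1.8 of them useful
# Election Desk: recalls 6 facts, about 6.0 of them useful
```

Under these made-up numbers, the 90% show "wins" the retention benchmark while delivering far fewer facts the prospective voter actually cares about, which is the sense in which a single standardized score smuggles in an apples-to-apples assumption.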
B. The Network News Network Effect (NNNE)
The second framework error involves balancing a news diet. In business, the term 'network effect' refers to how the number of users in a system influences the value derived by each individual user. The telephone, for example, is a pretty useless invention if only one person owns one; the value of a telephone increases significantly as the number of telephone owners increases. As I reflect more on the role of media in society, I've realized that consumers of news experience a similar phenomenon: for a single consumer, the value of each news program is influenced by the total number of news programs watched. In other words, my initial attempt to measure the informative power of different shows failed to take into account that the value of each show isn't static. The benefit of consuming more sources stems from two positive effects:
NNNE Benefit 1 - Knowledge of Media Discourse Distribution
To avoid confusion from smushing two ambiguously broad terms together, I'll define media discourse as "the manner in which a certain issue is discussed across media outlets." In my non-Ph.D.-in-Communications opinion, being 'informed' involves more than just knowing the ins and outs of an issue; it involves awareness of how issues are presented to the public. To illustrate this point, imagine a magical universe in which someone accurately calculated power rankings for both (1) the informative power and (2) the lasting power of various news programs. Based on these metrics, you decide to split your time between two shows, 'Show A' and 'Show B'. Furthermore, assume that the government is currently debating two huge legislative proposals, 'Proposal 1' and 'Proposal 2'. After watching both shows' coverage of the proposals, you discover a divergence of opinion, illustrated by this chart:
At this point, assuming the power rankings are accurate, you have heard the two most informative sources outline the pros and cons of both issues. This seems like an amazing practical application of measuring the 'informative power' of media. For issues with two opposing sides, you could just find the most valuable provider of each viewpoint. Want to be informed on an issue like gun control? Simply use quantitative rankings to find the "best" pro-liberalization and anti-liberalization sources, and consume content from both. Here's the problem, though: by eliminating less 'informative' sources, this approach prevents you from understanding how opinions are distributed. By limiting your news intake to a few 'valuable' sources, you become ignorant of big-picture trends in how issues are presented to the public. In the prior example, if you watch only Show A and Show B, it might be hard to figure out whether you exist in:



[Charts: Universe A OR Universe B OR Universe C]

