10/24/2017 – Weapons of Math Destruction (part 1)

This week we begin reading and discussing Cathy O’Neil’s book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, chapters 1 through 6. Students in the class: please post your blog comment on this entry by Tuesday, October 24, at 6pm.



Filed under Class sessions

19 responses to “10/24/2017 – Weapons of Math Destruction (part 1)”

  1. “When Mathematica […] teachers as failures, the district fires them. But how does it ever learn if it was right? It doesn’t. The system itself has determined that they were failures.”

    This quotation from the introduction encapsulates an early theme of WMD: that many Big Data models have little way to test – or reinforce – success versus failure. In an educational context, that argument seems weak. If teachers tagged as ‘high performers’ by the algorithm do not get their students into university – or succeed on some metric harder to game than a standardised test – at higher rates than ‘lower performers’ do, the algorithm can be rejected with ease.

    The argument has more potency in Chapter 1 on LSI-R: racial minorities might be selected for longer prison sentences because they are more likely to reoffend, but this very selection means they lose their jobs and hence *do* reoffend. However, even then, presumably a model with better inputs (e.g. cross-partial derivatives – how reoffence rates change as both race and sentence length increase together) could still be ‘fairer’ than an arbitrary judge. A rough sketch of the kind of model I mean follows.
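
    To make this concrete, here is a minimal sketch of a reoffence model with an explicit interaction term, so that the effect of sentence length is allowed to differ across groups. Everything here – the data, the variable names, the coefficients – is invented purely for illustration; it is not O’Neil’s model or the real LSI-R.

        # Hypothetical sketch: a logistic model of reoffence with an
        # interaction column (sentence_years * group), approximating the
        # 'cross-partial derivative' idea. All data is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        sentence_years = rng.uniform(1, 10, n)       # invented feature
        group = rng.integers(0, 2, n).astype(float)  # invented attribute
        p = 1 / (1 + np.exp(-(0.3 * sentence_years - 2)))  # toy ground truth
        reoffend = (rng.random(n) < p).astype(int)

        # Design matrix including the interaction column:
        X = np.column_stack([sentence_years, group, sentence_years * group])
        model = LogisticRegression().fit(X, reoffend)

        # The third coefficient estimates how the effect of sentence length
        # *differs* across groups, rather than assuming one uniform effect.
        print(model.coef_)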

    In general, this ties into a greater criticism I have of the book: that, while its criticisms are true of some mathematical models, some of the time, there is no structural critique of the principle of machine learning or data analysis. Instead, there are arguments about those models using incorrect inputs (mortgage default variance, standardised test accuracy, etc.).

    Yet first, this argument makes no comparison with a world in which random, biased humans make decisions – a racist judge; a Wall Street trader guessing which mortgages he likes. Second, it ignores that competitive markets correct for inaccurate models: another company can bid for the contract with a better teacher-selection algorithm, or make a fortune betting against subprime MBSs.

  2. tamara2205

    While O’Neil states that “the privileged are processed more by people” and “the masses by the machines” (8), in Chapter 5 we learn that even the most privileged applicants are often weeded out by algorithms. However, detecting what HR doesn’t want is much easier than finding what it does want, such as candidates with “intelligence, inventiveness and drive” (119). O’Neil mentions companies like Gild that try to address this issue by developing proxies for originality and social skills, including measuring a candidate’s “social capital” and the “skill and social importance of their contacts” (120). However, the quantifiable signals that correlate with the sought-after je ne sais quoi are rarely so explicitly present in the available data (120). After all, the candidates “might be doing something else offline, which even the most sophisticated algorithm couldn’t infer,” such as “attending a book group” (121).

    The secret ingredient for a star employee seems to be difficult to detect from the observable data, in particular from what the candidate consents to provide in a professional setting such as LinkedIn. However, there exist mineable data sources that can provide insight into the more personal aspects of candidates: social networking sites. In the eyes of employers, algorithmically analyzing a candidate’s Facebook profile might not even be considered an invasion of privacy, given that the profile has been made public. While cheaper, faster and more comprehensive than interviews, however, these models are still WMDs – opaque, scalable and damaging. As much as it may benefit them, employers should not be entitled to know anything other than what the candidate chooses to reveal to them specifically, regardless of what the candidate shares on social media. A line must be drawn for algorithms: beyond examining basic qualifications, their convenience soon begins to come at the cost of privacy and respect.

  3. In Chapter 5 entitled “Civilian Casualties,” O’Neil berates the stop and frisk system as “intrusive and unfair” (100). One salient point she touches on here is the relationship between emergent technology and the practice of stop and frisk. Here, O’Neil discusses the increased reach of surveillance through the deployment of security cameras (101). She then goes on to equate having our movements watched and analyzed to a “digital form of stop and frisk.”

    Though I agree that the idea of someone watching me around the clock is eerily reminiscent of Big Brother in Orwell’s 1984, I would not go as far as O’Neil in saying that video surveillance should be termed a “digital stop and frisk.” There is a physicality to stop and frisk that appears to be – for the better – lost in digital surveillance. The idea of being felt up by a stranger who already has a cynical disposition towards me makes me shudder. It is dehumanizing to be treated as a criminal, especially when you do not know why someone suspects you of wrongdoing.

    It gives me a bit of solace to think that, under digital surveillance, if your face is found to be unconnected to criminal activity, you will never have the feeling of being wrongly suspected. In a world of only digital surveillance, not physical, you can go on with your day without having an antagonistic face-to-face encounter with a stranger. O’Neil even notes that in a world of thousands of security cameras that send out our images for analysis, “police won’t have to discriminate as much” (101). This is a clear benefit of digital surveillance over traditional stop and frisk, one that makes it far superior to, and distinct from, its predecessor.

  4. Throughout Weapons of Math Destruction, O’Neil makes repeated reference to the power of statistical models to set off feedback loops, whereby the prevalence of some undesirable social phenomenon is exacerbated by models that are supposed to make our lives easier. In particular, O’Neil notes that the desire for efficiency at all costs, described as a primary driving force on page 116, is often at the root of these harmful feedback cycles, as accurate representation of the social systems such models attempt to capture is sacrificed in favor of expediency at best and malicious distortion at worst.

    I personally found Ch. 5’s exposition of crime tracking software and stop and frisk policy to best demonstrate how disturbing some of these feedback loops can be, given the good intentions of technology like PredPol. Concerning such crime tracking software, O’Neil mentions that the focus on crime associated with lower income areas versus white collar crime effectively criminalizes poverty (91).

    I imagine that crime tracking software won’t disappear, and perhaps it shouldn’t, given the potential that such software has to build better communities with the right kind of ethical decision making. Wouldn’t it be interesting if these tracking technologies were smarter about the impact of the crimes they detect, ranking crimes by their social impact at a finer granularity than the almost laughably named Part 1 and Part 2? For instance, a police force could be motivated to map out areas where truly violent crime was occurring, rather than blindly mapping all regions with the same kind of risk assessment. Furthermore, the model could be amended so that crimes with greater social impact are given heavier weight, as in the rough sketch below. After all, many more people are affected by a predatory loan company’s fraud than by a single instance of carjacking.
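
    As a toy sketch of the weighting I have in mind – the crime categories and weights here are entirely invented, not drawn from PredPol or the book – an area’s risk score could sum its reported crimes scaled by social impact, instead of counting Part 1 and Part 2 incidents as monoliths:

        # Hypothetical impact-weighted area scoring (all weights invented):
        impact_weights = {
            "violent_assault": 10.0,
            "burglary": 4.0,
            "vagrancy": 0.5,            # nuisance crime barely moves the score
            "predatory_lending": 25.0,  # white-collar harm counted, for once
        }

        def area_risk(reports):
            """reports: list of crime-type strings logged in one area."""
            return sum(impact_weights.get(crime, 1.0) for crime in reports)

        print(area_risk(["vagrancy"] * 20))                  # 10.0
        print(area_risk(["predatory_lending", "burglary"]))  # 29.0

    Under a scheme like this, twenty nuisance stops would weigh less than one predatory-lending case, so patrols would not be drawn to poor neighborhoods simply because nuisance crime is easiest to observe there.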

  5. Alex Gill

    O’Neil frequently employs logical fallacies in her arguments, rendering them unlikely to convince hostile readers. I’ll explore two of several examples found thus far.
     
    O’Neil claims that new hires in finance and Big Data are “ravenous for success and have been focused on external metrics – like SAT scores and college admissions – for their entire lives” and that this reliance on external metrics translates into measuring their productivity in dollars earned, disregarding morality and social impact in the process (47). O’Neil’s characterization of these hires is outright stereotyping – taking an SAT test and going to college hardly indicates a life focused entirely on external metrics. More importantly, O’Neil employs an ad hominem argument that attacks workers’ characters in place of evidence about their contributions to discriminatory models.

    Later, O’Neil employs the “slippery slope” fallacy when considering potential college rankings inputs. She argues that if graduation rates are used as an input, then schools will simply reduce math and science prerequisites, and this will in turn reduce the number of STEM professionals (66). However, it is uncertain whether schools would actually respond this way, and unclear how reducing STEM prerequisites would discourage scientists and technologists. Without good evidence of either consequence, O’Neil poses a weak objection. Additionally, she mentions that if postgraduate income were used as an input, colleges would retaliate by shrinking their liberal arts programs in favor of engineering and science programs. It therefore seems to me that if rankings combined postgraduate income inputs (which perhaps favor STEM) with graduation rate inputs (which work against STEM), the two unintended consequences could cancel each other out, as the toy illustration below suggests. Thus, O’Neil’s argument that schools can always “game” the system is fairly unconvincing.
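
    Here is a toy illustration of those offsetting incentives – every number and weight is invented, and real rankings are of course far more complicated. If a school games its graduation rate by cutting STEM requirements, it may simultaneously depress graduate income, leaving a combined score unchanged:

        # Hypothetical combined ranking score (weights and figures invented):
        def combined_score(grad_rate, median_income, w_grad=0.5, w_income=0.5):
            # Normalize income to a 0-1 scale against a notional $100k cap.
            return w_grad * grad_rate + w_income * (median_income / 100_000)

        before = combined_score(grad_rate=0.80, median_income=60_000)
        after = combined_score(grad_rate=0.90, median_income=50_000)  # 'gamed'
        print(before, after)  # 0.70 vs 0.70 -- the gaming buys nothing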

    Although I believe big data causes much harm in our society, O’Neil’s frequently fallacious logic fails to demonstrate this.

  6. In Chapter 5, “Civilian Casualties”, O’Neil outlines the pitfalls of WMDs in the context of law enforcement, with algorithms like PredPol for crime prediction. In this example, the PredPol system fails because it focuses on the wrong inputs – the “type and location of each crime and when it occurred” (86) – and manipulates them to create a “pernicious feedback loop. The policing itself spawns new data, which justifies more policing” (87). O’Neil makes good points regarding the algorithm’s flaws, as it essentially victimizes the poor. However, the argument seems too simplistic.

    Since law enforcement resources are limited, would the distribution of resources be fairer without PredPol? Is it better to randomly target different geographical areas, or to leave it to individual police units to decide where to patrol? Much of the argument against PredPol is based on the systemic biases and value systems embedded in big data algorithms. However, every decision we make is inherently full of biases and value systems, so we would merely be trading one system for another. At least big data and machine learning systems force us to model an issue with particular biases and flaws that we can point to, argue about, and update at scale. In this case, transparency is the variable required for improvement, so that big data algorithms generate more utility by their existence than by their absence.

    Also, having grown up in a not-so-safe neighborhood, I would definitely prefer more police patrolling to less. It does increase the likelihood of getting stopped by the police for no valid reason, but that is a price paid for a safer neighborhood.

  7. Meg Verity Elli Saunders

    Having outlined the characteristics of WMDs as “Opacity, Scale, and Damage” (31), O’Neil explores these ideas by explaining the issues of WMDs through the example of U.S. college ranking systems. She describes how, in the U.S. News model, college rankings are determined by a complex algorithm. However, since the effectiveness of colleges is difficult to measure, the algorithm relied on various alternative, more measurable pieces of data, or “proxies.” All these proxies were then combined and weighted in the algorithm to output one definitive rank. One of O’Neil’s issues with this is that “Now the vast reputational ecosystem was overshadowed by a single column of numbers” (53). What she means by this is that the algorithm took all the different attributes of the schools and imposed its (or its maker’s) own assertion as to what each of them was worth, creating a single flawed scale by which to judge schools against each other. The government’s version of simply “releas[ing] loads of data on the website,” so that students can ask their own questions about the things that matter to them, is, O’Neil argues, “transparent, controlled by the user, and personal” – “the opposite of a WMD.” The key difference between these two systems is opacity: the former hides all the original data inputs, revealing only the single piece of data output by the algorithm, whilst the latter shows all the data, so that prospective students can, in a sense, employ their own unique algorithm to decide which school is best for them (a toy example follows below). This highlights, in my opinion, an issue faced by all statistical algorithms created to describe complex real-life situations: it is arguably pointless to attempt to reduce any institution or ecosystem to a single ranking or number.
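
    A small sketch of that “user as algorithm” idea – the schools, attributes, and weightings below are entirely invented – shows how the same raw data can yield two different “best” schools for two different students, which is precisely what a single published ranking erases:

        # Hypothetical data: two schools scored 0-1 on three attributes.
        schools = {
            "A": {"cost": 0.9, "teaching": 0.5, "research": 0.9},
            "B": {"cost": 0.4, "teaching": 0.9, "research": 0.6},
        }

        def rank(weights):
            """Order schools by a student's own weighted sum."""
            return sorted(schools, key=lambda s: -sum(
                weights[k] * schools[s][k] for k in weights))

        print(rank({"cost": 0.7, "teaching": 0.2, "research": 0.1}))  # ['A', 'B']
        print(rank({"cost": 0.1, "teaching": 0.8, "research": 0.1}))  # ['B', 'A']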

  8. Caroline Cin-kay Ho

    According to O’Neil, one of the key elements of Weapons of Math Destruction is opacity, whether of a model’s results or its method (31). In “Arms Race,” she applauds the Education Department’s release of data on colleges for being “transparent, controlled by the user, and personal… the opposite of a WMD” (67). Instead of relying on an opaque ranking system, she argues, “students [could] ask their own questions about the things that matter to them” (67). While I agree that this data release is a step in the right direction toward empowering the public, I find O’Neil’s apparent praise of the fact that students “[didn’t] need to know anything about statistics” (67) somewhat discomfiting. Certainly, in this scenario, students might not need to know much about statistics – they might simply search for schools with tuition below a certain threshold, for example. However, should students wish to use the data to answer questions not easily answered by search, such a lack of knowledge might cause great harm and make results effectively opaque. For example, imagine that a student wanted to know whether to apply to public or private colleges and decided to settle the question by averaging the mean incomes of graduates from all public and all private schools respectively. This student might then conclude that since, on average, private school graduates have higher incomes, all private colleges are better than public colleges, and might then happily send off applications to for-profit colleges while forgoing the UCs, ignorant of the flaws in her methodology and conclusion (a toy version of this trap is sketched below). Ultimately, in order to truly end the power of WMDs, releasing data or providing powerful analytical tools isn’t enough – members of the public must have some basic education in statistics to know how to use the data.
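
    To make the trap concrete – with figures that are entirely invented – here is how averaging school-level means can mislead when a few outliers dominate:

        # Hypothetical mean graduate incomes per school:
        private_means = [150_000, 35_000, 32_000, 30_000]  # one elite outlier
        public_means = [55_000, 52_000, 50_000, 48_000]

        def avg(xs):
            return sum(xs) / len(xs)

        print(avg(private_means))  # 61,750 -- 'private wins' on average
        print(avg(public_means))   # 51,250
        # Yet three of the four private schools here pay less than
        # every single public school in the list.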

  9. rhwang2

  9. In the first segment of the book, O’Neil attempts to illuminate the shortcomings of what she calls “Weapons of Math Destruction”: large-scale, opaque models that do significant societal harm. For example, O’Neil offers the case of the rating agencies, whose incentives encouraged them to systematically provide overly optimistic ratings at the expense of debt investors. The author writes, “The risk ratings on the securities were designed to be opaque and mathematically intimidating, in part so that buyers wouldn’t perceive the true level of risk associated with the contracts they owned” (chapter 2). O’Neil also points to the college application process and the use of standardized tests for evaluating teachers as examples of WMDs. In all of these examples, O’Neil argues that the model is “overly simple, sacrificing accuracy and insight for efficiency” (chapter 1).
    For all of these WMDs, though, it is precisely because the models must be applied so vastly that they are crude. Thus, I wonder whether there is truly a viable alternative to using these WMDs in some capacity. Although algorithmically filtering job candidates can be unreasonable in specific cases, can one really expect a cost-conscious firm to pay for the extra labor to filter through thousands more applicants (occasionally with lower hit rates)? Even with costs out of mind, can we expect people with differing subjective standards to, for example, evaluate and compare the hundreds of teachers in a school district? Such a direction places significant trust in the judgment of people instead of the crude objectivity of an algorithm.

  10. Divya Siddarth

    Coming off of reading The Attention Merchants, what immediately interested me about Weapons of Math Destruction was O’Neil’s discussion of, and examples regarding, ‘WMDs’ that function as feedback loops. In class, we discussed whether the advertising Wu characterizes blamelessly acted on the prevailing wisdom of the time (for example, targeting women with diet- or marriage-based stories), or helped create that wisdom by propagating stereotypes. I think in the case of advertising, and in the case of the other models discussed by O’Neil, it is certainly more the latter, and I appreciated how O’Neil expounded on how these models can “contribute to toxic cycle[s] and help sustain [them]” (27). Evidence for this comes from every avenue, ranging from the US News and World Report college rankings, which created “vicious” and “self-reinforcing” (53) cycles in which a suffering reputation and deteriorating conditions caused the ranking to fall further, to “predatory ads” (72), which decide that “vulnerability is worth gold” (72) and target the pain points of the poor to sell them unrealistic dreams that keep them in poverty. Even in the criminal justice system, where fairness is paramount, we see how predictive models can create “pernicious feedback loops” (86): policing in impoverished neighborhoods leads to arrests and stops for actions that aren’t noticed in wealthier neighborhoods, which then spawns new data that drives police back to those neighborhoods, and the cycle continues. I appreciate that O’Neil acknowledges how intrinsic these loops are to the way data can be used in these circumstances, even by those who think they are doing good. We also see that, in some cases, these toxic cycles can be mitigated – as when Xerox removed commute variables from its resume filtering model, to make sure that impoverished communities weren’t being unduly targeted, “sacrificing a bit of efficiency for fairness” (119). I think this is the direction in which we must go.

  11. Bradley Richard Knox

  11. After reading the first 6 chapters of Weapons of Math Destruction, I think that O’Neil struggles to show concretely that the negative impacts of WMDs are actually as bad as she seems to think they are. I agree with some of her points, such as the idea that feedback loops in crime prediction software like PredPol often impact poorer, often minority, communities in a negative way. However, even though I agree with many of her points, I don’t necessarily think that the software is the source of the problem, and simple tweaks, such as weighting crimes differently when inputting them into the system, could be a fast and easy fix. In other words, a little bit of common sense could go a long way in overcoming this supposed WMD.
    I also think that she dramatically oversimplifies the impacts of WMDs across many domains, choosing very specific negative examples and neglecting most of the positive effects that these new technologies might bring. For example, it is sad that Sarah Wysocki lost her job, but O’Neil never really addresses whether the value-added model succeeded at its original goal of increasing academic performance. Another example is the Saudi university that gamed the system to move up in the college rankings; in my opinion, those same rankings also successfully push universities to improve and compete with one another, and hold them accountable for their shortcomings. It’s not that I disagree with her point that certain people get treated unfairly by these mathematical models; it’s just that I feel she has fallen in love with telling her side of the story via a specific instance, and has expanded that instance to represent the system as a whole.

  12. sm201

  12. In the first six chapters, Cathy O’Neil begins by narrating the pervasiveness of mathematical models in society. Analyzing deeper, O’Neil identifies three key characteristics of “Weapons of Math Destruction” – Opacity, Scale, and Damage. O’Neil then illustrates these three characteristics through macro examples – e.g. the 2008 market crash and for-profit college scandals. Notably, in the latter example, O’Neil remarks, “The for-profit colleges do not bother targeting rich students. They and their parents know too much” (79).

    Effectively, O’Neil’s argument implies that there are two population subsets: those who understand the system and are somehow immune, and those who don’t understand the system and are vulnerable. However, this argument appears underdeveloped for two key reasons. First, there is no clear criterion for what level of “understanding the system” is needed to avoid these deleterious impacts. Second, even those who “understand” the WMDs may not necessarily be able to resist a WMD’s effect (e.g. a hedge fund manager who understands that the housing market is going to crash may not be able to find a buyer in time) – which makes opacity feel ultimately irrelevant.

    To the first point, O’Neil might argue that it is difficult to set criteria for “understanding” the model, given the opacity of the model itself. To the second, O’Neil might claim that the strength of correlation between “understanding” and resisting harm is largely case-dependent; the intelligent hedge fund manager still being hurt by a housing market collapse is clearly distinct from an intelligent parent successfully avoiding for-profit colleges.

    These counterarguments are valid, but we can still make progress. For example, we can acknowledge case-dependent distinctions in resisting WMD effects, and attempt to correlate level of “understanding” (by societal rank: e.g. CEOs of WMD-containing industries vs. clerks) with immunity. Taking these steps would likely help to fully develop O’Neil’s argument.

  13. Does “Disparate Impact” Cover Algorithmic Discrimination by Proxy Data?

    O’Neil’s most salient point in this week’s section is that the quality of an algorithm’s predictions depends heavily on the quality and relevance of the input data. In other words, “garbage in, garbage out.” O’Neil writes, “WMDs routinely lack data for the behaviors they’re most interested in. So they substitute stand-in data, or proxies” (17). The use of proxy data points in high-stakes situations, such as qualifying for a job or getting a loan, can have far-reaching and discriminatory consequences. The clearest-cut case study O’Neil presents is that of Xerox (Ch. 6), which used third-party software to find job applicants who lived close to work (if employees don’t have to commute far, they are more likely to stay in their jobs, which is better for the company’s bottom line). This excluded job applicants in distant zip codes, who were also more likely to be of lower SES (and who, in theory, would be further disadvantaged by being denied the job, and might then remain far from economic centers, which would prevent them from getting a job – a classic cycle of impoverishment). This effect set off all sorts of “illegal” alarm bells for me: under Title VII of the Civil Rights Act of 1964, employers cannot discriminate against a number of protected classes, nor can they use tools in the application process that have a “disparate impact” on any of those classes. O’Neil mentions the 1971 Supreme Court case Griggs v. Duke Power Company (108) that established this principle of “disparate impact.” My question for further research is: how would prospective employees prove they’ve been discriminated against by proxy data? One standard diagnostic is sketched below. Either way, waiting for class-action suits seems like quite a whack-a-mole approach. What proactive steps might the EEOC take to account for this datafied version of classic discrimination?
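
    For what it’s worth, one established diagnostic is the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80% of another’s, that is treated as prima facie evidence of disparate impact. The hiring numbers below are invented, but the ratio itself is a real and widely used test:

        # Adverse impact ratio (four-fifths rule); applicant counts invented.
        def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
            """Ratio of group A's selection rate to group B's."""
            return (selected_a / total_a) / (selected_b / total_b)

        # e.g. applicants screened out by a commute-distance proxy:
        ratio = adverse_impact_ratio(selected_a=30, total_a=100,   # distant zips
                                     selected_b=60, total_b=100)   # nearby zips
        print(ratio)        # 0.5
        print(ratio < 0.8)  # True -> evidence of disparate impact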

  14. In “Bomb Parts”, Cathy O’Neil discusses the applications of WMDs in penal systems. Nihilistic about progress in penal reform, O’Neil states her belief that the Koch brothers’ and the Center for American Progress’ “bipartisan effort to reform prisons, along with legions of others, is almost certain to lead to the efficiency and perceived fairness of a data-fed solution” (30).

    The effort, named the “Coalition for Public Safety”, seems to challenge O’Neil’s hypothesis, at least in the public eye. Instead of being a WMD-pushing body, the CPS appears similar to the “loads of data on a website” solution that the Obama administration implemented for post-secondary education (67). On the CPS website, readers are informed about the six states the CPS is involved in reforming, and the descriptions show no reference to black-box solutions, even under the key issue of “Re-Entry”. A similar emergence of accessible data in justice reform has grown through projects such as “The Sentencing Project” and “The Next to Die”, which hail information accessibility as a change agent. Recently, even recidivism models such as COMPAS have had assessment reports disseminated online by organizations such as the Electronic Privacy Information Center.

    Does this represent a shift from reform-by-WMD to reform-by-information? Perhaps, judging by the prevalence of information transparency. However, opaque recidivism algorithms are still applied in sentencing and parole hearings in 48 states, with little reform (“Algorithms”). SCOTUS itself has yet to support risk-assessment algorithm transparency: Loomis v. Wisconsin’s (2017) petition for certiorari, filed “because the proprietary nature of COMPAS prevents a defendant from challenging the accuracy and scientific validity of the risk assessment,” was denied in June 2017. Evidently, as O’Neil insinuates, these projects seem little more than guises for real system reform, suggesting that WMDs are institutionally here to stay.

    “Loomis v. Wisconsin (2017)”. SCOTUSblog. N.p. 2017. Web. 22 October 2017.

    “Algorithms in the Criminal Justice System.” Electronic Privacy Information Center. EPIC. 2017. Web. 22 October 2017.

  15. In the first few chapters of her narrative, O’Neil portrays the vulnerabilities, fallacies, biases, and negative impacts that widespread mathematical models can have. Furthermore, she focuses on the key point that, while many people benefit from these models, the problems embedded in them lead to many people suffering their consequences (31). But while she describes this suffering and explains how it is hurting the population, she doesn’t dive deeply into the nature of the ethical concern. It seems as though each example conveys a wrongdoing in the system that is difficult to argue against.

    This brings up the following ethical questions: at what ratio of benefit to suffering is a model acceptable? And, more generally, what is fair? These questions are difficult to answer, as they are largely matters of opinion, and it is difficult to accept the allowance of any suffering, although it seems utopian to deny that some amount of suffering is inevitable.

    O’Neil hints at various themes that may serve as partial answers. Maybe it is the reason the suffering are suffering that is unfair. Maybe it is the level of transparency in the models and systems that allows this suffering. After all, no model is perfect, and variables that are important to some degree will be left out (59). Maybe it is the predatory intent and misleading nature of these models, as used by private colleges and institutions such as Corinthian Colleges (71). All of these factors contribute to the problem at hand, but as it stands, what is deemed fair and possibly regulated will likely be judged on a case-by-case basis. A general ethical agreement regarding WMDs between regulators and corporations seems far off from where we stand now.

  16. Julia Thompson

    O’Neil states throughout Weapons of Math Destruction that the problems behind WMDs are feedback loops and application at large scale. While both may cause an algorithm to be destructive, I would argue that the human aspect of these algorithms is the problem, not the computers. Even WMDs have the potential to be used appropriately if humans are not abusing them.
    One example of humans causing the problems of WMDs is in Chapter 3. When US News & World Report began ranking colleges, opinion surveys sent to university presidents were the only criterion for the ranking. This was not a blind computer inputting its own data. These were humans with opinions and biases who knew that their ranking would be seen as legitimate only if “Harvard, Stanford, Princeton, and Yale came out on top.” The human bias fed into the ranking through the opinion surveys and through the deliberate failure to consider other relevant data, like tuition and fees. If humans were not part of this algorithm, it would likely provide a more helpful ranking for college applicants.
    Another example is police using their own policing choices to validate themselves. When they decide that nuisance crime is as important as violent crime, they can choose how to address the nuisance crime. While it can be found virtually everywhere, the fact that it is found where they choose to look gives them motivation to continue looking there.
    WMDs are not necessarily dangerous due to their scale or the data they use. They often require misuse by a human to be truly dangerous. For instance, it is quite easy to remove biased data, as Xerox did in its hiring algorithm (119) – see the sketch below. While O’Neil focuses strongly on the problem of WMDs, their users cause as many problems as the algorithms themselves, or in some cases more.
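
    A minimal sketch of that Xerox-style fix – the column names and data here are hypothetical, not Xerox’s actual system – is simply to drop the proxy column before training, trading a little predictive efficiency for fairness:

        # Hypothetical attrition model; 'commute_km' acts as a proxy
        # for zip code / income, so we exclude it from the features.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        df = pd.DataFrame({
            "test_score": [0.9, 0.4, 0.7, 0.6],
            "commute_km": [30, 5, 25, 8],   # the proxy to be removed
            "stayed_1yr": [1, 0, 1, 1],
        })

        features = df.drop(columns=["commute_km", "stayed_1yr"])
        model = LogisticRegression().fit(features, df["stayed_1yr"])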

  17. sandipsrinivas1

    After reading the first six chapters of Cathy O’Neil’s book, one major theme that sticks out to me, especially following the reading of “The Attention Merchants,” is the concept of personal agency. More specifically, I wonder how O’Neil views people’s responsibility for their reliance on WMDs, and how this compares to, say, Wu’s view of how much onus people have in fighting against the trappings of the attention economy.
    The most telling quote of O’Neil’s comes when she says “It’s up to society whether to use that intelligence to reject and punish [people]–or to reach out to them with the resources they need” (118). This surprised me, as it led me to realize that O’Neil’s issue with WMDs is not their existence, but rather the ways in which they are leveraged. When she talks earlier in the chapter about Kyle Behm getting rejected from jobs on account of a questionnaire that may point to mental health issues, for example, her suggestion seems to be that this questionnaire should result in these companies increasing their resources for those with mental health problems rather than reject them outright.
    This is a very pragmatic view – one that acknowledges the prevalence of WMDs and the difficulties that would come with trying to eradicate them entirely. It reminds me of Wu’s view of attention merchants: his epilogue urged a reclamation of personal agency, insisting that people be aware of the attention economy and its trappings rather than try to get rid of all malicious attention merchants altogether.
    As a whole, both authors’ views place a large emphasis on individual effort in the face of inevitable systems. This view will be interesting to track, as it is one O’Neil says she will “[return] to in future chapters” (118).

  18. After describing the US News model of college rankings as a WMD, O’Neil suggests an alternative. She describes how the US Department of Education released data about US colleges on a website, effectively generating “individual models for each person” by allowing each person to apply their own weights and assign their own priorities (67). The website O’Neil describes[1] still sorts colleges by metrics like “% earnings above HS grads,” “Salary after attending,” and “average annual cost,” creating a pseudo-ranking that depresses colleges which might otherwise represent the ideals of “learning, happiness, confidence . . .” (52). (Though I think these metrics are in line with the concerns of most parents and college-going students nowadays.)

    But looking at two government websites for data presentation – one focused on raw data on each college[2], one with a built-in search experience[3] – I agree that these systems are more “transparent, controlled by the user, and personal” (67). While I believe there will still be a lucrative market for “identifying the best schools for a better future,” packaging this information in an accessible, no-cost package is a good step. I’m only concerned that these “individual models” won’t end up very far from the hyper-competitive model US News perpetuates, focused on “success metrics” like “earnings after college” and softer factors like “name recognition” and “network value.” And this would do nothing to address the inequality maintained by this system: the privileged gather around high-profile schools where other privileged students go, and the price is driven above what most people can afford.

    I think O’Neil holds back from promoting something that would really be an anti-WMD: breaking trust in higher education as a path to financial security, instilling a sense of disillusionment about what college can provide. After all, if the metrics that drive both US News and College Scorecard relate to financial return, emphasizing things like student loan debt and mediocre job-seeking performance after college would strike directly at students’ deepest concerns. If you really wanted an anti-WMD, you would need to make college seem wasteful – a marginal return that just isn’t worth the extra investment, and a foolhardy decision that actually sets people back in a competitive market for “success.” You would need to reduce higher education to a haven for only the most fanatic of knowledge seekers, not a symbol of achievement or a tool for self-improvement.

    And I can start to see why one wouldn’t want to promote that. How does an academic institution support itself when it becomes a geek club for those who can afford to waste time on it? What happens when people turn away from all college has to offer in favor of coding bootcamps and traditional certification programs?

    [1] https://collegescorecard.ed.gov/search/?degree=b&major=engineering&sort=advantage:desc
    [2] https://nces.ed.gov/collegenavigator/?q=stanford+university&s=all&id=243744
    [3] https://collegescorecard.ed.gov/school/?243744-Stanford-University
