There has been considerable push-back against “doing your own research” recently, and I’m not entirely happy with that. I’m aware, of course, that the phrase “do your own research” tends to be used by and/or associated with rather delusional people who believe that watching a YouTube video or googling something counts as “research”, but pushing back too hard (or in the wrong way) against such silliness risks ending up with the other extreme: elitism and counterproductive conformism.
There are (at least) two aspects of “doing your own research” and the push-back that are worth paying closer attention to. One has to do with the apparent ambiguity of the concept of “research”; the other has to do with expertise and expert authority. Let’s start with the first.
Many people seem to think that “theory” means something like a guess or a hunch, while the word really refers to a scientific hypothesis that has been well supported by evidence (and that fits in a broader scientific worldview). Similarly, many people seem to think that “research” refers to googling something or watching a YouTube video. Both of these mistakes have much to do with a widespread ignorance about science. This ignorance is partially due to a failure of education, but also due to the continuous reinforcement of ignorance by equally ignorant (and/or malicious) mass media, “influencers”, and politicians.
The ignorance itself isn’t new, of course. Throughout much of the 20th century only a fairly small percentage of people had a clue about what science is, how it works (and doesn’t work), and so forth. The masses are and have always been un(der)educated and largely ignorant. What has changed, however, is that the masses have been given a voice. While the ignorant masses used to be effectively silenced and marginalized by their lack of access to the means of public self-expression, the internet and social media have given everyone a podium.
This democratization of expression is probably a good thing, but it has some important implications that don’t seem to be sufficiently recognized. If the ignorant masses are to be given a voice, they should be made less ignorant first. The alternative is much like handing out instruments to random people and calling the result an orchestra – all you’d get is cacophony, noise. That’s what the internet (and social media especially) is like: the noise of a collection of people banging, plucking, and blowing at instruments they have just been handed but don’t really understand.
Education has not really adapted to this new situation. Sure, there are “critical thinking” and “media literacy” courses that are supposed to address this kind of issue, but not everyone takes such courses. (They should probably start in primary school and continue throughout high school, rather than be limited to college/university undergraduates.) Furthermore, these courses typically try to avoid anything that comes too close to real critique. One wouldn’t want to instill a real critical spirit in students – they might become critics of the status quo!
Part of the reason why education fails in this respect has to do with this latter point, of course. The socio-political and economic elite has no interest in promoting a critical spirit. The masses are much easier to manipulate and control if they are mostly ignorant. Educated citizens are dangerous – they might question sacred givens such as religious beliefs and the pseudo-scientific foundations of the socio-economic systems most of us are forced into. Furthermore, the extreme right – which is getting ever more powerful – has always reveled in anti-intellectualism. In Adolf Hitler’s view, “a person with little scientific education, but physically healthy and filled with determination and willpower, is much more valuable for the nation than a brilliant weakling”,1 and this remains the default view on the right of the political spectrum.
So what then actually is research? As is often the case with apparently simple questions, there is no simple answer. The Frascati Manual 2015, which sets the standards for defining and measuring research and development and related activities, defines research as “creative and systematic work undertaken in order to increase the stock of knowledge – including knowledge of humankind, culture and society – and to devise new applications of available knowledge”.2 This definition gives us some important keywords: creative, systematic, increasing the stock of knowledge. The third of these keywords might set the bar too high, however. Indeed, in the context of what the Frascati Manual is about, research (or R&D) is supposed to increase mankind’s (scientific, technological, and other) knowledge, but this is not the case for everything we’d justifiably call research. When an undergraduate student has to do “research” to write a “research paper”, then what she is doing – provided that she is doing it well – does count as “research”, even though it is unlikely to add anything new to “the stock of knowledge”.
Research is a systematic and thorough investigation into some subject matter with the aim of gaining new knowledge, if not for mankind as a whole, then at least for oneself. (Consequently, there is no sharp boundary between studying something and researching something.) Research requires familiarity with the key concepts, theories, and methodologies that are relevant to whatever the research is focused on. However, it doesn’t necessarily involve doing experiments or gathering data. Such research would typically be called “empirical research” (or something like that), but there also is theoretical research, which is based mostly on reviewing and reflecting on what has been published before. (The empirical/theoretical research categorization is not exhaustive, by the way, and neither are the two categories mutually exclusive.)
Given that few of us have laboratories or other equipment for experimental research, “doing your own research” typically means doing theoretical research – or that is what it should mean at least. (“Typically”, because survey research and computer simulations can now be done by anyone with a computer.) As such, “doing your own research” means thoroughly familiarizing yourself with the scientific literature, terminology, methodologies, and so forth that matter for your research topic. “Doing your own research” on epidemiology requires understanding what a SEIR model is and how it works, for example, as well as the ability to read and understand new scientific papers published in the field.
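To give a rough idea of the kind of model literacy this involves, here is a minimal sketch of an SEIR model – my own illustration, with made-up parameter values, not anything taken from the epidemiological literature: a population divided into Susceptible, Exposed, Infectious, and Recovered compartments, with flows between them governed by a contact rate, an incubation rate, and a recovery rate.

```python
# Minimal SEIR sketch (illustrative only; the parameter values below are
# assumptions, not estimates from any real epidemic). The population is split
# into Susceptible, Exposed, Infectious, and Recovered compartments, and the
# flows between them are integrated with a simple Euler step.

def seir(beta=0.3, sigma=1/5, gamma=1/7, n=1_000_000, i0=10, days=300, dt=0.1):
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    peak_infectious = i
    for _ in range(int(days / dt)):
        new_exposed    = beta * s * i / n * dt   # S -> E: contacts with infectious people
        new_infectious = sigma * e * dt          # E -> I: end of the incubation period
        new_recovered  = gamma * i * dt          # I -> R: recovery/removal
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        peak_infectious = max(peak_infectious, i)
    return peak_infectious, r                    # peak prevalence and final size

if __name__ == "__main__":
    peak, total = seir()
    print(f"peak infectious: {peak:,.0f}, ever infected: {total:,.0f}")
```

Understanding why changing the contact rate (beta) shifts and flattens the epidemic curve, and why the model’s assumptions (homogeneous mixing, constant rates) limit its predictions, is the kind of background knowledge that “doing your own research” on epidemiology presupposes.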
Again, it must be emphasized that the bar shouldn’t be set too high. Undergraduates do research for their “research papers”, so one doesn’t necessarily need a PhD in a certain field to do research in that field. The bar shouldn’t be set too low either, however. Undergraduate research papers tend to be quite bad, and if one aims to actually learn something from one’s research (let alone learn something that is sufficiently valuable to share with others, or that could even contribute to the stock of human knowledge), then one would have to do a lot better than the average undergraduate college/university student (and probably even better than the average graduate student).
One doesn’t have to be an expert in the field of one’s research, however, although by doing a lot of research one may become one. Some experts seem to dislike this and protect their expert status like a fortress against any kind of non-expert intrusion. This is understandable. If you have spent a decade or (usually) more becoming an expert in some particular field, you’re unlikely to take a novice seriously. That novice might even be a threat if he becomes too influential and thereby undermines your work and the work of your colleagues. But most of all, it is just incredibly tiresome to try to deal with people who don’t even understand the most basic facts of your research field, and unfortunately – because too many people who claim to be doing “their own research” confuse research with googling something and/or watching a YouTube video – this is probably more often the case than not.
There’s another side of the coin, however. If someone from outside a certain scientific field has really done their own research (in the sense explained above), then it is not impossible that that person can make a real contribution to that scientific field. Sometimes scientific innovations and breakthroughs result from something like this: an outsider making herself acquainted with a certain research field and offering a new perspective. Of course, this is rather unlikely for amateur “researchers” venturing into new (to them) fields, but I don’t think this is a common motivation for “doing your own research” either. Rather, such “research” typically seems to be motivated by a distrust of experts and/or some (socially) accepted view.
Often such distrust of expertise is unwarranted. Often it is rooted in a kind of populist anti-intellectualism that is quite common in some communities and that is closely related to the anti-intellectualism of Hitler and other fascists mentioned above. And often it seems to be a consequence of the Dunning-Kruger effect (i.e. vastly overestimating one’s own knowledge about a topic because one knows so little about it that one doesn’t even understand how little one knows). But sometimes distrust of expertise is warranted – for example, when expertise is claimed but really lacking, or when a scientific field becomes entirely self-referential and loses all contact with reality. The latter is arguably the case for mainstream, neo-classical economics, but the people who are exposing and fighting that tend to be scientists/experts themselves (and most certainly not amateur researchers doing “their own research”).
An interesting example of someone claiming expertise where they have none was discussed before on this blog: the case of Michael Mann criticizing a report by David Spratt and Ian Dunlop on the (inter-)national security aspects of climate change.3 Mann claimed expertise because he considered the paper to be in his field, climate change, and was extremely critical – he considered it an example of “doomism”. However, the paper is really in the field of (inter-)national security and treats climate change as an independent variable – that is, it doesn’t attempt to explain or predict climate change, but takes climate change as a given and attempts to explain or predict its (inter-)national security effects. Mann is not an expert on (inter-)national security, and even with the little I know about that field, it was quite obvious that he was way out of line. He was, in other words, abusing his status as a climate change expert to claim expertise in a field he knows little about, merely because it had something to do with climate change. Unfortunately, this problem is not restricted to this example. It appears to be quite widespread in climate research, in fact.
To predict climate change, there are two things you need to know: (1) how much carbon we are going to emit, and (2) what the effects of those emissions are going to be. The second is climate science, and climate scientists have a pretty good understanding of that side of things. The first is not climate science – it concerns what humans do (rather than what nature does) and is, thus, social science. Unfortunately, climate scientists are pretty much ignorant with regard to (1), which makes sense, as they are natural scientists and not social scientists. They aren’t any more ignorant in this respect than the rest of us, however, because even social scientists don’t have a clue about how much carbon we are likely to emit. The problem is that without a good estimate of emissions, predicting effects is a mere theoretical exercise. And without a good understanding of likely and unlikely emission scenarios and what affects them, one cannot possibly understand climate change.
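A back-of-the-envelope illustration makes the asymmetry concrete (the numbers below are rough, assumed round figures, not predictions): warming scales roughly linearly with cumulative CO₂ emissions – the so-called transient climate response to cumulative emissions (TCRE), with a best estimate somewhere around 0.45 °C per 1000 GtCO₂ – so the arithmetic on the effects side is comparatively simple, and the answer is driven almost entirely by the emissions number you feed into it, which is precisely the social-science part nobody can supply.

```python
# Back-of-the-envelope illustration only. Warming scales roughly linearly with
# cumulative CO2 emissions (the "TCRE"). The TCRE value, current warming, and
# the scenarios below are rough assumed round numbers; the point is that the
# projected warming depends almost entirely on the assumed future emissions.

TCRE = 0.45           # degC of warming per 1000 GtCO2 (approximate best estimate)
WARMING_SO_FAR = 1.2  # degC above pre-industrial, roughly

scenarios = {         # hypothetical future cumulative emissions, in GtCO2
    "rapid decarbonization": 500,
    "current policies": 2000,
    "continued growth": 4000,
}

for name, emissions in scenarios.items():
    extra = TCRE * emissions / 1000
    print(f"{name:>22}: ~{WARMING_SO_FAR + extra:.1f} degC total warming")
```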
Of course, climate scientists are aware of this, but it seems that they collectively have fallen prey to something like the Dunning-Kruger effect. Like Michael Mann believing himself to be an expert on (inter-)national security whenever the climate is somehow involved, climate scientists – at least collectively – appear to believe that they have a sufficient grasp of the relevant social science, when they really don’t. There are no plausible social-scientific models of carbon emissions. The models and scenarios used by climate scientists, like the Shared Socioeconomic Pathways (SSPs), are absurdly simplistic, have nonsensical economic foundations, and don’t allow any feedback between climate and socioeconomic factors. Because of this, climate scientists cannot predict climate change. They believe they can, because they believe they have plausible models of scenarios of carbon emissions, but insofar as any climate scientist genuinely believes that the SSPs and similar approaches are realistic, that belief can only be rooted in a version of the Dunning-Kruger effect – that is, having so little knowledge of the social science involved that they don’t even realize that they really don’t know anything about it at all.
The case of mainstream economics is probably the best illustration of why dissenting views – including those by people doing their own research – are so important. Alternative views have effectively been banished from economics departments in the past decades (to business schools, mainly, but also to other academic departments), and because of that, if you want economic advice or economic predictions, an economics department is now just about the worst place to go.4 The few economists who predicted the Great Recession of 2008, for example, were all “heterodox” (i.e. non-mainstream) economists who had been pushed out of economics departments a long time ago.5
Mainstream economics is an extreme case, but it reveals an important reason why pushing back against “doing your own research” too strongly is potentially dangerous. That push-back is effectively a call to uncritically submit to expert authority, and I’m not a fan of uncritical acceptance of authority (regardless of what kind of authority it is). A functioning democracy requires investigative, critical citizens and authorities that are open and receptive to critique. This doesn’t apply just to political authority, but to any kind of authority, as one kind may be partially based on another. The push-back against “doing your own research” puts expert authority on a pedestal, but nothing deserves to be put on a pedestal. Expert authority can be wrong – devastatingly wrong even, as the case of mainstream economics so clearly shows.
The climate change example, on the other hand, illustrates what may be one of the best reasons to “do your own research”: scientists cannot answer your question. I want to know how hot it is going to get, but climate scientists cannot answer that question because they don’t know – and don’t even really try to predict – how much carbon we are going to emit. So if I want an answer to this question, the only way I’m going to find it is by doing my own research. (For the results thereof, see the Stages of the Anthropocene, Revisited series.)
The best reason for doing your own research, however, is just a desire to better understand something and/or to learn new things. That’s what motivated me to build an economic model of production, for example, and a model of the spread of Covid and the effects of various policies thereon. The conclusion of the latter example also contains the only passage where I recommended “doing your own research”.6 Until now, that is, because I’m going to repeat that recommendation: do your own research, but make sure that what you are doing really is research.
Notes
1. Adolf Hitler (1925), Mein Kampf (München: Franz Eher Nachfolger), p. 452. My translation.
2. OECD (2015), Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and Experimental Development, The Measurement of Scientific, Technological and Innovation Activities (Paris: OECD Publishing), p. 44.
3. See Michael Mann versus the “Doomists”.
4. Aside from banks and other financial institutions, perhaps, but those tend to overlap with economics departments.
5. The economist or economic historian Ha-Joon Chang also noted that you don’t need economists to have sensible economic policies. See: Ha-Joon Chang (2010), 23 Things They Don’t Tell You About Capitalism (London: Penguin).
6. Actually, what I wrote there is “Don’t trust me – trust real experts. And do your own research.”