Cecil Abungu
4 min read · Mar 22, 2023


REWARDING KNOWLEDGE SYNTHESIS IN THE AGE OF LARGE LANGUAGE MODELS

Large language models are likely to have a profound impact on one of the ways we’ve always differentiated between who is deserving and who isn’t: how good they are at knowledge synthesis. Knowledge synthesis, which I take to be one’s ability to identify and assess information and then find patterns and insights in it, has always been a skill whose mastery we use to decide who is special and who isn’t, especially in educational and hiring settings.

Before we continue, it’s worth noting that in many fields (especially the social sciences) knowledge synthesis of primary sources is not common. In such cases, students and professionals synthesize knowledge starting from knowledge synthesis done by others. They find and read through books, papers, reports, etc., and then analyse them in a cross-cutting way to find new patterns and insights. The existence and dominant use of search engines has meant that this is generally the mode of knowledge synthesis we reward. So we essentially reward someone for taking the next step in knowledge synthesis using their own mind. At the core, I reckon we think of ourselves as mostly rewarding the effort of (i) studying and learning the existing knowledge and (ii) using our own minds to find something unique.

Relative to search engines, what do LLMs change? I think the fundamental difference is that LLMs can (i) ‘study and learn’ the existing knowledge and (ii) take whatever additional steps in knowledge synthesis you’d like, for you. Search engines have always had a limit: they bring you only the knowledge that other people have synthesized and published on the internet. To create something original and unique, one still has to take some additional step in knowledge synthesis. But LLMs can (or seem to be on the path to being able to) synthesize knowledge as deeply as one would like them to. Essentially, they can help you create something original and unique based on what already exists on the internet (or rather, what they’ve been trained on). Of course, there are caveats here: their training data may be imperfect for some subject matters, and they still hallucinate or give false information, meaning that you can’t entirely trust their output. Yet in general I think the position holds true, and it is bound to become truer as the drive to develop and release more capable models continues.

So what does this mean for which skillset we reward in many educational and hiring situations? Imagine a situation, a few years from now or, at this pace, next year, in which a large number of people use LLMs that are very capable, barely hallucinate, and give accurate information (including sources). Imagine as well that the companies behind these models begin tiering and commercializing them according to capability. If we continue to give outsize attention to final products (a personal statement, a final paper, etc.) in our assessments of who is deserving, would it mean that we’re simply rewarding the person who types the best prompts into the LLM that they run? Are we alright with that? And if the commercialization comes to be, would it mean that we’re rewarding a person’s privilege, which is what allows them to access and run more capable models?

For the latter, one may respond that this is what already happens anyway: some people have access to better libraries and teaching than others, which makes them better at knowledge synthesis, and we’ve learnt to live with that. But I would respond that such access doesn’t necessarily lead to a drastic leap in knowledge synthesis capability. That is, one may have access to a lot of material while their ability to work through it and find new insights remains weak. The challenge of very capable LLMs is that they would give a person both the material and heavy-duty knowledge synthesis capability. In any event, some of us who care about fairness are still not okay with the current state of access to libraries and teachers.

For the former, one may argue that to some degree we already reward those who can type better questions into search engines. There is some truth to this, but I would argue that even in that scenario, next-step-on-your-own knowledge synthesis remains by far the dominant thing we reward. Typing a question into a search engine produces sources that one must then analyse carefully to produce high-quality output of one’s own. If LLMs become very capable at knowledge synthesis, what is being rewarded would shift to the prompt typed in and perhaps the final editing of what the LLM produces.

If we accept this to be true, I think LLMs will call for a reckoning about what we ought to reward in educational and hiring settings. Does typing a better prompt into an LLM require a skill that we are happy to reward in an outsize way? What does this mean for how we value original thinking from one’s own mind? And how should we respond to any tiered commercialization of LLMs? More on these questions in my future posts!

