Back in February, we began work on the "Narratives of the Tech War" project – we being Kai Oppermann and Jakob Landwehr-Matlé of the Technical University of Chemnitz as well as myself. Funded by the German Foundation for Peace Research, we are conducting a pilot study into how narratives of technological leadership in Artificial Intelligence influence and are influenced by great-power relations. We compare policy narratives in China, the US, and the EU to see how these actors construct their narratives, how these narratives shape policy, and how the actors react to each other's narratives.
Creating the codebook
Narrative research is closely related to other forms of textual analysis like discourse analysis or content analysis (see here for another example from our research). As in these other methods, we first developed a codebook around the narrative elements of setting, characters, and plot.
- The setting tells us about the world in which the narrative plays out. Here, we look for actors telling us about their views of the international system – is it competitive, conflictual, cooperative? – and of AI – is it an economic or a security technology? Is it singularly disruptive?
- The characters are the other actors inhabiting the setting. For instance, when coding Chinese texts, we look for portrayals of the EU and the US, their supposed motivations and actions.
- The plot is what happens during the narrative. This includes constructions of the self – what do "we" need technological leadership for? And through which measures should we pursue it? (See the sketch after this list.)
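
To give a rough idea of what such a codebook can look like in practice, here is a minimal sketch of the setting/characters/plot structure represented as a simple data structure for computer-assisted coding. The category names and codes are hypothetical placeholders for illustration, not our actual coding scheme.

```python
# Illustrative sketch of a narrative codebook as a nested data structure.
# All category names and codes below are hypothetical placeholders, not the
# coding scheme actually used in the project.
CODEBOOK = {
    "setting": {
        "international_system": ["competitive", "conflictual", "cooperative"],
        "nature_of_ai": ["economic_technology", "security_technology", "singularly_disruptive"],
    },
    "characters": {
        "portrayal_of_other": ["partner", "rival", "threat"],
        "attributed_motivation": ["economic_gain", "security", "prestige"],
    },
    "plot": {
        "purpose_of_leadership": ["prosperity", "autonomy", "military_advantage"],
        "proposed_measures": ["investment", "regulation", "alliance_building"],
    },
}

def validate_code(element: str, category: str, code: str) -> bool:
    """Check that a coding decision uses a code defined in the codebook."""
    return code in CODEBOOK.get(element, {}).get(category, [])

# Example: one coding decision for a single text segment.
assert validate_code("setting", "international_system", "competitive")
```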
The codebook went through three versions. Between versions, we triple-coded a small subset of documents and then compared our coding decisions. This allowed us to gauge where the codebook needed revision. We tinkered with the categories and developed more detailed coding guidelines for some of them until we were satisfied that we had achieved a sufficient degree of reliability.
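As a simple illustration of how triple-coded decisions can be compared, the sketch below computes pairwise percentage agreement between three coders. Both the coding data and the choice of agreement measure are illustrative assumptions, not the procedure we actually followed.

```python
from itertools import combinations

# Hypothetical coding decisions: for each text segment, the code that each of
# the three coders assigned to the same category. All values are made up.
codings = {
    "coder_A": ["competitive", "cooperative", "conflictual", "competitive"],
    "coder_B": ["competitive", "cooperative", "competitive", "competitive"],
    "coder_C": ["competitive", "conflictual", "conflictual", "competitive"],
}

def pairwise_agreement(a: list[str], b: list[str]) -> float:
    """Share of segments on which two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Average agreement across all coder pairs; low values flag categories where
# the codebook needs revision or more detailed coding guidelines.
pairs = list(combinations(codings.values(), 2))
average = sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)
print(f"Average pairwise agreement: {average:.2f}")
```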
Selecting texts
In parallel, we constructed a corpus of relevant documents. Our focus was on central government documents (strategies, policy papers, reports) pertaining to AI in general, not on specific AI applications (e.g. smart cities, self-driving cars) or AI papers from specific policy fields (e.g. education). The only exception was AI papers from the defence and security communities, because this topic is of special importance to great-power relations and already has specific technologies (autonomous weapons systems) and discursive frames (the "AI Arms Race") that are of particular interest to us. We later snowballed from this initial sample and the secondary literature to identify further documents missed in the first round of research. Since none of us speaks Chinese, we are limited to English translations of Chinese documents, which are offered by some US think tanks. This obviously creates a potential for bias, both in the selection of documents and in the linguistic choices made during translation, on which we need to reflect critically.
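The logic of the snowballing step can be sketched roughly as follows. The document names and reference links in the code are made up for illustration; the actual selection relied on reading the documents and the secondary literature rather than on a script.

```python
# Illustrative sketch of snowball sampling from an initial document sample.
# Document names and the reference links below are hypothetical placeholders.
REFERENCES = {
    "national_ai_strategy": ["defence_ai_paper", "innovation_report"],
    "defence_ai_paper": ["autonomous_systems_review"],
    "innovation_report": [],
    "autonomous_systems_review": ["national_ai_strategy"],
}

def snowball(seed_documents: set[str]) -> set[str]:
    """Expand a seed corpus by following references until no new documents appear."""
    corpus = set(seed_documents)
    frontier = list(seed_documents)
    while frontier:
        doc = frontier.pop()
        for cited in REFERENCES.get(doc, []):
            if cited not in corpus:
                corpus.add(cited)
                frontier.append(cited)
    return corpus

print(sorted(snowball({"national_ai_strategy"})))
```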
I will give more details on the coding process in future blog posts. If you have any thoughts or questions, please leave a comment.