Publication: Using Large Language Models for Qualitative Analysis can Introduce Serious Bias
dc.contributor.author | Ashwin, Julian | |
dc.contributor.author | Chhabra, Aditya | |
dc.contributor.author | Rao, Vijayendra | |
dc.date.accessioned | 2023-11-08T15:40:46Z | |
dc.date.available | 2023-11-08T15:40:46Z | |
dc.date.issued | 2023-11-08 | |
dc.description.abstract | Large Language Models (LLMs) are quickly becoming ubiquitous, but the implications for social science research are not yet well understood. This paper asks whether LLMs can help us analyse large-N qualitative data from open-ended interviews, with an application to transcripts of interviews with displaced Rohingya people in Cox’s Bazaar, Bangladesh. The analysis finds that a great deal of caution is needed in using LLMs to annotate text, as there is a risk of introducing biases that can lead to misleading inferences. Bias here is meant in the technical sense: the errors that LLMs make in annotating interview transcripts are not random with respect to the characteristics of the interview subjects. Training simpler supervised models on high-quality human annotations with flexible coding leads to less measurement error and bias than LLM annotations. Therefore, given that some high-quality annotations are necessary to assess whether an LLM introduces bias, this paper argues that it is probably preferable to train a bespoke model on these annotations than to use an LLM for annotation. | en |
dc.identifier | http://documents.worldbank.org/curated/en/099433311072326082/IDU09959393309484041660b85d0ab10e497bd1f | |
dc.identifier.doi | 10.1596/1813-9450-10597 | |
dc.identifier.uri | https://openknowledge.worldbank.org/handle/10986/40580 | |
dc.language | English | |
dc.language.iso | en | |
dc.publisher | World Bank, Washington, DC | |
dc.relation.ispartofseries | Policy Research Working Papers; 10597 | |
dc.rights | CC BY 3.0 IGO | |
dc.rights.holder | World Bank | |
dc.rights.uri | https://creativecommons.org/licenses/by/3.0/igo/ | |
dc.subject | LARGE LANGUAGE MODELS (LLMS) | |
dc.subject | SOCIAL SCIENCE RESEARCH | |
dc.subject | ROHINGYA PEOPLE | |
dc.subject | TEXT AS DATA | |
dc.subject | CHATGPT | |
dc.subject | QUALITATIVE ANALYSIS | |
dc.subject | LLAMA 2 | |
dc.subject | MACHINE BIAS | |
dc.subject | ANNOTATION | |
dc.title | Using Large Language Models for Qualitative Analysis can Introduce Serious Bias | en |
dc.type | Working Paper | |
dspace.entity.type | Publication | |
okr.crossref.title | Using Large Language Models for Qualitative Analysis can Introduce Serious Bias | |
okr.date.disclosure | 2023-11-07 | |
okr.date.lastmodified | 2023-11-07T00:00:00Z | en |
okr.doctype | Policy Research Working Paper | |
okr.doctype | Publications & Research | |
okr.docurl | http://documents.worldbank.org/curated/en/099433311072326082/IDU09959393309484041660b85d0ab10e497bd1f | |
okr.guid | 099433311072326082 | |
okr.identifier.docmid | IDU-99593933-9484-4166-b85d-ab10e497bd1f | |
okr.identifier.doi | 10.1596/1813-9450-10597 | |
okr.identifier.doi | http://dx.doi.org/10.1596/1813-9450-10597 | |
okr.identifier.externaldocumentum | 34192813 | |
okr.identifier.internaldocumentum | 34192813 | |
okr.identifier.report | WPS10597 | |
okr.import.id | 2248 | |
okr.imported | true | en |
okr.language.supported | en | |
okr.pdfurl | http://documents.worldbank.org/curated/en/099433311072326082/pdf/IDU09959393309484041660b85d0ab10e497bd1f.pdf | en |
okr.region.country | Bangladesh | |
okr.topic | Information and Communication Technologies::ICT Applications | |
okr.topic | Information and Communication Technologies::ICT Policy and Strategies | |
okr.topic | Macroeconomics and Economic Growth::Economic Theory & Research | |
okr.unit | DECRG: Poverty & Inequality (DECPI) | |
relation.isSeriesOfPublication | 26e071dc-b0bf-409c-b982-df2970295c87 | |
relation.isSeriesOfPublication.latestForDiscovery | 26e071dc-b0bf-409c-b982-df2970295c87 |