N Egami, M Hinck, BM Stewart, H Wei. "Using large language model annotations for valid downstream statistical inference in social science: Design-based semi-supervised learning." arXiv preprint arXiv:2306.04746, 2023. Cited by 7.
P Röttger, V Hofmann, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy. "Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models." arXiv preprint arXiv:2402.16786, 2024. Cited by 3.
M Bosley, M Jacobs-Harukawa, H Licht, A Hoyle. "Do We Still Need BERT in the Age of GPT? Comparing the Benefits of Domain-Adaptation and In-Context-Learning Approaches to Using LLMs for Political Science Research." 2023. Cited by 3.
N Egami, M Hinck, B Stewart, H Wei. "Using imperfect surrogates for downstream inference: Design-based supervised learning for social science applications of large language models." Advances in Neural Information Processing Systems 36, 2024. Cited by 2.
M Jacobs-Harukawa. "Does Microtargeting Work? Evidence from an Experiment during the 2020 United States Presidential Election." GitHub, https://muhark.github.io/static/docs/harukawa-2021-microtargeting…, 2022. Cited by 2.
N Egami, M Hinck, BM Stewart, H Wei. "Using Large Language Model Annotations for the Social Sciences: A General Framework of Using Predicted Variables in Downstream Analyses."