Abstract for: Toward a more effective strategy for distinguishing AI bots from human responses in survey research
This work aims to determine how bot-generated responses differ from human-generated responses in data collection systems as advances are made by both researchers and bot algorithm creators. If bot-generated response behavior differs from human-generated response behavior, beyond the data themselves, that difference presents a potential new means of identifying bot-generated responses. Survey research datasets with indicators of likely bot and human responses are used to generate reference modes for bot and human responses, which are analyzed and used to develop a system dynamics model. The model is compared against the reference modes in behavior reproduction tests and used to develop survey strategies that exploit limitations in the underlying AI feedback structure, with an emphasis on identifying sustainable strategies for survey research. As expected with surveys, there is an initial spike in responses that slows over time. Bot responses suggest higher engagement early in the data collection process, while human responses progress more gradually. It appears that bot and human responses may be incentivized differently. These preliminary results indicate significant differences between bot-generated and human-generated responses. As AI advances, traditional methods of identifying bot-generated data become less effective, and assessing the behavioral patterns of each response type may emerge as a compelling approach to identifying bot-generated data. The implications of this work include highlighting the challenges in bot identification, offering a framework to analyze their impact, and improving data integrity in survey research.