Andre Rosario

How About Privacy? Using AI to Transcribe Qualitative Interviews

For my history dissertation, I wanted to try using new tools like Otter.ai to transcribe some of my oral-history interviews.


But I was wary about privacy and about how Otter would use my interviewees' words.


According to Otter's Privacy & Security page, Otter uses “a proprietary method to de-identify user data” to train its large language models. The training depends on audio recordings and transcripts that are “not manually reviewed by a human,” and its training data is encrypted.

 

It also states that “no customer data will be used to train or improve our AI Service Provider(s)’ artificial intelligence models/algorithms.”


Otter also lists its subprocessors, including OpenAI, which it uses “for evaluating the effectiveness of our Large Language Models (LLM).”


Even after reading these policies, I still wonder if most qualitative researchers feel like they know enough about language models and AI products to evaluate whether they can trust them.


When I applied for a formal exemption from the Institutional Review Board for my oral-history interviews, I specified that I planned to use Otter. In the consent forms I sent to interviewees, I also stated that I would be using Otter. When we met for a pre-interview meeting, I explained that I would use an online AI service to help me produce transcripts. Then I reviewed each transcript myself, using other software like ExpressScribe (and its recommended transcription pedal!).


Have you used AI to transcribe your interviews? What concerns have you had with this new tech, and how have you navigated them?
