Whether real or myth, the perception of the publish-or-perish phenomenon affects the reality of faculty members in higher education institutions (HEIs). Even in HEIs that do not explicitly tie publication to job security, there is a cultural and internal pressure to research and publish (Waaijer et al., 2018). The pitfalls of the publish-or-perish culture include ethical concerns (Hourneaux et al., 2024) and salami science, the practice of dividing a research study into many parts for the purpose of maximizing the quantity of scholarship dissemination (Pfleegor et al., 2019). Across many fields, there are calls to shift from a publish-or-perish culture to one of publish-and-innovate (van Dalen, 2021) or publish-and-social-impact (Waters, 2024). These changes would ostensibly move the emphasis from publishing for quantity to publishing for quality. Kiai (2019) exhorted researchers to prioritize methodological rigor over prolific publishing.
With the advent of Artificial Intelligence (AI) tools that presumably make the process of generating papers easier, the marriage of quality and quantity in faculty scholarship may be possible. The use of AI tools in many fields has had its share of controversy in terms of ethics (Benefo et al., 2022), usefulness (Garcia, 2024), and trustworthiness (Yang & Wibowo, 2022), but as more tools become available and more of the population uses them in many contexts, the question for faculty may no longer be a matter of if but rather of which, when, and how.
AI tools are increasingly embedded in research workflows, representing a trending shift in how higher education faculty researchers create, analyze, and disseminate knowledge. From literature review automation to data analysis, writing, and editing assistance, AI applications are being integrated into the research process. At the same time, the publish-or-perish culture continues to shape faculty’s career development. Understanding how these two phenomena intersect is important for developing appropriate policies, supporting faculty well-being, and maintaining research integrity.
Publish-or-Perish and Its Effect on Research
The phrase publish or perish has long been part of academic life, and while the emphasis on research productivity has historical roots in the professionalization of academia, recently, there has been an intensification of publication expectations. De Rond and Miller (2005) examined the evolution of publication pressure, noting that the practice began as a way for scholars to disseminate knowledge and to ensure quality. This practice has unfortunately been transformed into a metric-driven system where quantity often overshadows quality.
This tension between research quantity and quality has been a prime concern in discussions of publication culture. Muborak (2024) found that publication pressures adversely affect research quality. In a review of the publish-or-perish paradigm, Rawat and Meena (2014) argued that this has led to questionable research practices, including salami-slicing of research findings, duplicate publication, and prioritizing incremental work over innovative scholarship.
Fanelli (2010) found that strong publication pressures in competitive academic environments were associated with higher rates of questionable research practices. Similarly, Tijdink et al. (2014) found a direct link between publication pressure and compromised research integrity. Taken together, these findings suggest that publish-or-perish environments may not be conducive to the production of quality research.
Artificial Intelligence Technologies in Research
In recent years, machine learning, natural language processing, and automated analysis systems have made AI tools increasingly accessible to researchers across disciplines, changing how scholarly work is done. Hajkowicz et al. (2023) conducted a bibliometric analysis and found that AI applications have rapidly diffused from specialized computational domains such as computer science to broader adoption across the social sciences and humanities. More recently, Karjus (2025) documented how generative AI is increasingly being adopted as it gains the ability to undertake complex qualitative tasks.
Researchers have benefited from integrating AI into their work. Within the research workflow, the literature review is one particular part that has been transformed by AI capabilities. Marshall and Wallace (2019) discussed how AI tools have helped to speed up the process of systematic reviews, which is one of the most time-intensive activities in research. Additionally, the integration of AI into qualitative research methods represents an emerging frontier. Bryda and Sadowski (2024) showed how AI can support bottom-up thematic analysis of interview transcripts using inductive coding.
Though there are many benefits to AI, researchers urge caution. Van Dinter et al. (2021) underscored the gains and the risks of overreliance on AI tools, which may miss nuanced information. For qualitative research specifically, Marshall and Wallace (2019) emphasized that human interpretation remains essential for meaning-making. Across research methodologies, Perkins and Roe (2024) examined the use of AI tools in quantitative and qualitative research and discussed how these tools enhanced research productivity but cautioned that their integration raises questions about research integrity, security, and authorship.
Faculty Perspectives on AI Use in Research: Benefits and Barriers
Recent survey studies of HEI faculty have found that adoption rates varied significantly by gender (Brown et al., 2025), discipline (Spathopoulou et al., 2025), and career stage (Mohammadi et al., 2026). Grassini (2023) investigated attitudes toward AI among academic staff, identifying a complex mixture of optimism and concern. While faculty recognized the potential efficiency benefits, many expressed uncertainties about appropriate use boundaries, reliability, and the implications for their professional roles. Faculty members who have adopted AI tools in research contexts cite efficiency gains as a primary benefit. Bolanos et al. (2024) analyzed the latest AI-assisted literature review tools and demonstrated their ability to process larger volumes of literature more quickly, identify relevant studies that might otherwise be missed, and stay current with rapidly expanding fields.
Despite potential benefits, concerns about AI use in research persist among faculty, including academic integrity. Perkins (2023) surveyed faculty regarding AI and academic integrity, finding widespread worry that AI tools could facilitate plagiarism, obscure attribution, and undermine the authenticity of scholarly work. Many participants expressed uncertainty about where to draw the line between acceptable assistance and inappropriate use.
The trust and reliability of AI outputs constitute additional significant barriers. Shata (2025) found that faculty expressed concerns about the accuracy of AI-generated results. In addition, the reproducibility crisis in machine learning demonstrates that many AI systems produce inconsistent or erroneous results (Ball, 2023). A lack of preparation for effective AI use emerged as a significant barrier to adoption across multiple studies. Limited familiarity with AI tools emerged as a core barrier: faculty often felt unprepared (Shata, 2025), lacked confidence in their ability to critically evaluate AI outputs (Trung, 2025), and had received few, if any, formal training opportunities in AI applications relevant to their research (Shata, 2025). This skills gap created anxiety and reluctance to experiment with available tools.
The Intersection: AI Adoption and Publication Pressure
The relationship between publication pressure and AI adoption represents an underexplored but critical area of inquiry. Evidence suggests that faculty experiencing high publication demands may be particularly motivated to adopt AI tools as productivity enhancers. While comprehensive empirical studies directly examining this relationship remain limited, existing research provides suggestive evidence. For example, Heaven (2022) reported on researchers’ use of AI writing tools, noting that time pressure and productivity demands were frequently cited motivations for adoption. Several interviewed researchers explicitly connected their AI use to the need to maintain competitive publication rates in resource-constrained environments.
Research Gap
In general, the purpose of this study is to explore, qualitatively, the intersection of AI and the publish-or-perish phenomenon for HEI faculty. There are critical gaps in the literature on this relationship. We aim to contribute to the emerging yet still limited literature on HEI faculty and their perceptions and use of AI-assisted qualitative tools.
Research Questions
Three research questions framed this study:
- What knowledge, if any, do HEI faculty members have regarding the use of AI qualitative analytic tools to conduct research?
- After training on the software, how do HEI faculty members believe it will impact their research?
- In what areas do the trained HEI faculty members believe the software would be helpful to them?
Methodology
The methodology for this exploratory study was basic qualitative research as described by Merriam and Tisdell (2015). Much of early AI research revolves around people’s general perceptions on the use of AI in teaching, learning, and research. As a logical next step, the purpose of this study was to explore how higher education faculty felt about using an AI analysis tool in their research work before and after using a specific qualitative data analysis tool (Intellectus 2.0).
Intellectus Qualitative (2026, also known as Intellectus 2.0) is an AI-driven qualitative analysis platform. Users upload their textual data, and the tool can perform automatic inductive and deductive coding, thematization, research question alignment, memoing (tracking analytic process with timestamps), and report production (including theme descriptions with corresponding excerpts). Intellectus 2.0 also has the capability to transcribe non-textual data (e.g., audio/video).
Participants
We used purposive sampling techniques to recruit participants. Eligible participants had to be a higher education institution faculty member (of any status) and they must have conducted at least one qualitative research study. We sent announcements via professional organization newsletters (e.g., the American Educational Research Association), university faculty meetings, and our professional networks. Potential participants were informed they would need to do a self-paced training on Intellectus 2.0 by a certain date (within our data collection timeframe).
A total of 32 participants responded to our invitation to participate in the study; 30 met eligibility criteria, and of these, 23 provided consent and completed the initial phase of data collection (a questionnaire). This initial phase provided pre-training data. For the second phase of data collection, the questionnaire respondents were invited via email to train on the AI-assisted qualitative analysis tool and to participate in an interview session. As an incentive, interview participants were given access to the AI tool for approximately six months after the data collection timeframe. Eight participants accepted our invitation, but due to various reasons (e.g., inability to train on the tool, scheduling conflicts), five were interviewed.
Instruments and Data Collection
Two data collection instruments were used in the study: a questionnaire and semi-structured interviews. The questionnaire, hosted on the SurveyMonkey platform, included three parts: an eligibility criteria section, a consent-to-participate section, and a main body with 21 questions (Likert-scale, multiple-choice, checkboxes, short-answer). The main body of the questionnaire was also divided into sections: AI tools experience, research output, demographic information, and contact information (see Appendix A). Participants accessed the questionnaire through a link provided in the recruitment materials. The questionnaire took less than 10 minutes to complete.
The AI tools section provided descriptive data to answer the first research question and, along with the research output and demographic information sections, data on participants' characteristics. The last section invited participants to the second phase of data collection. Descriptive statistics (e.g., means and frequencies) from the questionnaire were used to supplement and contextualize our qualitative findings. As such, the questionnaire data helped to establish a baseline from which the participants started the Intellectus 2.0 training and enabled us to analyze changes in perspectives on AI tool use in research after the training process. For example, an interview question revisited the barriers participants had reported in the questionnaire and asked whether AI tools could help overcome them. In addition, participant characteristics, such as their experience with qualitative research and AI tools, helped us contextualize our interview findings.
The second data collection instrument was a semi-structured interview (see Appendix B). The interview protocol included questions about their opinions on the Intellectus training, general AI tool use, its efficacy, potential effects of AI tools on their research output, and the ethical implications of AI tool use in research. Additional questions were only asked during the interview to clarify or elaborate.
Over five months, we sent multiple emails to all participants. The initial email included an access link to Intellectus 2.0 and requested that they contact us once they had completed the training. Eight participants responded. Five of them went through self-guided training. Training resources provided by Intellectus included a monthly qualitative session, weekly overview sessions, an FAQ list, and a resource video library. There were no requirements on which resources participants used for their training. We only required that they use their own data for exploring the tool. Once they felt that they had sufficiently learned the software, they contacted us for an interview. The principal investigator, Dr. Nubla-Kung, scheduled and conducted Zoom interviews averaging 35 minutes.
Data Analysis
For the questionnaire data, the process of analysis began with exporting the data from the SurveyMonkey platform into an Excel sheet. The data were analyzed using descriptive analysis. Specifically, means and percentages were calculated for the questionnaire items.
For the interview data, transcripts were created by the Zoom platform and were checked for accuracy. These transcripts were uploaded into Intellectus 2.0, which provided categories and themes. Additionally, to increase validity, Dr. Rankin analyzed the transcripts using thematic analysis, starting with line-by-line coding, then focused codes, categories, and finally themes. Then all the researchers came together to review both sets of codes and themes. This collaboration allowed for investigator triangulation, which involves the participation of two or more researchers in the same study to provide multiple observations and conclusions (Flick, 2022). This type of triangulation can bring both confirmation of findings and different perspectives, adding breadth to the phenomenon of interest (Carter et al., 2014; Flick, 2022). Ultimately, we found that while the AI did not do coding well, some of the themes it developed were insightful and allowed us to reflect on our own observations.
Findings
Participant Characteristics
The participants in the first phase of data collection numbered 23 HEI faculty members. This sample spanned a range of higher education roles (lecturers; assistant and full professors; deans; program directors; dissertation chairs; and advisors) representing 12 public and 5 private higher education institutions. Questionnaire participants ranged in experience in their HEI roles from less than a year to more than 10 years. In terms of research output, most published their work in journals an average of one to two times per year and presented at conferences at the same rate.
From the questionnaire data, we found that most of our participants had experience with AI tools in general, ranging from “little” (43%) to “moderate” (48%). In terms of their research work, 5% used AI tools “a lot”, 18% used them a “moderate amount”, 64% “a little”, and 14% “none at all”. For those who used AI tools in their research work, an equal percentage (64%) used them for editing and finding citations, whereas 41% and 36% of them used AI tools for writing and data analysis, respectively. Other reported uses for AI tools included preparing proposals, writing code, planning, paraphrasing, and cross-checking. Regarding AI use in the future, a majority (89%) of participants believed that AI tools could help them conduct more research, especially to circumvent the “lack of time” barrier.
The five interview participants represented two public R1 and one private RCU institutions, where they served as a program director (1), adjunct professor (1), assistant professor (1), and full professors (2). These participants were highly experienced: two had been in their role for 6-10 years, and three for more than 10 years. Their research output followed the same pattern as the full sample: publishing and presenting at conferences one to two times per year on average. None of them had prior experience with Intellectus 2.0.
Interview Findings
After analyzing the data collected in the interviews, two themes emerged: The Experience of Learning and Using an AI Program for Qualitative Research and Considering the Future Use of AI Programs for Qualitative Research (See Table 1).
Theme 1: The Experience of Learning and Using an AI Program for Qualitative Research
The following categories were used to develop this theme: AI Adoption Through Experiential Understanding, Complementary but Imperfect Assistant, and Balancing Efficiency and Familiarity.
AI Adoption through Experiential Understanding. All but one participant completed the full training, which consisted of a series of videos demonstrating how to use the AI program. The four who completed it found it helpful, enabling them to use all the program’s features. John stated, “it answered my basic questions. It did more than I thought it could do, based on, you know, my own just dabbling. I got more out of it. I saw that it could do more.” The participant who didn’t complete the training expressed a desire to learn by using the system. Clay said, “The interface was very intuitive, so I loaded up some data I had from a prior study, and it was super easy to use it… the only how-to or training I did was how to set up my account.” She went on to say:
[Later] I wanted to go and look at the training videos. I could not find them. They’re very difficult to find. So, I went to look for them, and… it took me too long to find them, so I’m like, I’ll just see if I can figure this out on my own.
Like Clay, Claire wanted to jump straight in and use it, but she did go through the training, saying, “the experience was fine. I had to go through some steps and boxes, but honestly, I just wanted to get my hands dirty with just the data set I had already done.”
Complementary but Imperfect Assistant. The participants viewed the AI program as helpful as a limited assistant rather than as a replacement for human analysis. Clay said it best, “I felt like there’s definitely a place for it, but I definitely [think], at this point, anyway, it’s not like it could replace a human.” John had a more detailed breakdown of where the software didn’t do enough. He said,
I was trying to look at organizational resiliency specifically, in terms of COVID, and so I had done a bunch of COVID interviews, and I really was interested in if it [the AI] could, pull out various examples of organizational resiliency. And I think it did an okay job… it gave me a lot of stuff that was about resiliency. The problem was more about the project itself, and I couldn’t differentiate it from COVID. There’s no, at least that I could see, no way to instruct it like I could with ChatGPT, like, no, no, no, don’t… only do it with COVID-related stuff.
The inability to be more specific through a command meant that John would have to do more manual sorting of the data that the software included in its analysis. In contrast to John, Claire found the software faster than if she had done it herself. She said,
It took a lot less time than if I was doing it myself. The way it came up with the themes wasn’t my wording, or the way I would word it, or communicate it… so if I would have used it and come up with those themes. I probably would have reworded it.
Gary felt similarly to Claire and stated he would probably use this software to help his faculty do more research, which is further discussed in the next theme.
Theme 2: Considering the Future Use of AI Programs for Qualitative Research
While the first part of the interview focused on the experience of using the program, the second half focused on the participants’ reflections on the implications of using the software in the future. This theme came from that part of the interview and had two categories: Future AI Use in Research and Ethics of AI in Research.
Future AI Use in Research. Each participant felt they would use the software or other AI software in the future when analyzing research. Gary thought that it would be a good tool to help faculty publish as required by his accrediting body. When asked about how it would help, he said,
Coding takes a long time…freaking a long, long time. It was an Excel spreadsheet with about 12 sheets that kept getting smaller and smaller…it was a lot of different views of how words were analyzed. It’s about six months, and then the question is, how long [did the software take]… It kind of… it took a lot of the… the crunching out of it.
Jason felt he might personally continue to use the software if he had more training. He said,
I think if I could learn to use it better, it would help me work faster, because I have no qualms with… I mean, in the end, I don’t really have any qualms with… [software]…helping me create my categories. If I already know what my categories should be, more or less…And it doesn’t create a lot of noise, and it just helps me get to that place where then I can do the fine-tuning.
Claire suggested a similar idea, stating she would probably use it to supplement her own analysis.
Ethics of AI in Research. While discussing future use, the faculty also considered some ethical implications of the AI software, including its use by students. All the participants felt that using AI in research analysis was ethical if the researcher knew their data. Jason summed it up best,
I’m not sure if it rises to the level of being unethical. But you have to know what’s in your data, I feel like, but, like I said, it’s so seductive. Once you know it can do this, you just throw your data in there and say, go at it, and then tell me what you found. I think without even having to read the interview.
The other participants felt the same. They also felt that students might be tempted in this way.
The biggest concern expressed about student use, especially at the doctoral level, was that students would not learn the skills needed to verify whether the AI's output is correct when they do use it. When discussing students using the software, Gary said,
Well, I want students to use it [and] I want students to manually code. I also want them to learn how to use AI. I would prefer… you can’t go as far as saying I demand…that they manually code and then use the AI to validate. Determine if there’s any differences, and that helps them to. Critically analyze the AI today, and what to look for in the future.
Claire brought up another ethical concern about disclosure,
from a business perspective, because I’m a business person. We used to do financial statement analysis and banking. And we use tools to spit out an analysis. And I think if someone just had it spit it out, and that was their analysis. They should disclose that to whoever they’re providing the analysis to. They should say, hey, I use this, and this is the analysis it gave me.
The disclosure of AI use was a common concern among all participants.
Discussion
The findings presented in the preceding section illuminate the complex landscape of faculty perceptions and experiences with AI qualitative analytic tools in higher education institutions. These results, derived from in-depth exploration of participants’ knowledge, expectations, and practical experiences, require careful interpretation within the broader context of technology adoption in academic research. The following sections are organized to address our three research questions, as well as the study’s limitations, recommendations for future research, and implications for practice and policy.
What knowledge, if any, do HEI faculty members have regarding the use of AI qualitative analytic tools to conduct research?
Questionnaire data showed that our participants’ knowledge of and experience with AI tools in their research were little to moderate. This aligns with previous survey data showing that 40% of global faculty were beginning their AI literacy journey (Digital Education Council, 2024). All participants in the study indicated they had no experience with qualitative AI software; however, they were familiar with AI chat tools such as ChatGPT, Gemini, and Claude. One interview participant, Jason, indicated that he had used the software NVivo many years earlier to analyze data, but that the software did not have an AI component.
After training on the software, how do HEI faculty members believe it will impact their research?
The theme The Experience of Learning and Using an AI Program for Qualitative Research answers this research question by revealing that the faculty members’ knowledge of AI qualitative analytic tools is predominantly developed through hands-on experimentation rather than formal training. It demonstrates that when the faculty engage with these tools, they tend to build their understanding through direct application and self-guided exploration, leveraging their existing qualitative research expertise. The theme suggests that faculty knowledge is characterized by an experiential learning approach, in which they develop competency through practical application and autonomous learning, seeking specific guidance only when necessary. This indicates that the faculty’s knowledge of AI tools is often intuitive and practice-based rather than theoretical or comprehensively structured.
The participants’ perception of AI as a valuable but limited tool could be due to the hands-on approach described above. However, the data shows that the faculty still believe AI software could enhance research efficiency for routine tasks while requiring significant human oversight and expertise. This balanced perspective suggests faculty are developing a nuanced understanding of how to integrate AI tools into their qualitative research workflows without compromising methodological rigor or analytical depth, recognizing that while AI can support and expedite specific processes, human researchers must remain the primary analytical agents who bring theoretical sophistication and interpretive judgment.
In what areas do the trained HEI faculty members believe the software would be helpful to them?
The training and use of the software demonstrated that the participants recognized the potential for AI to streamline workflows and increase research productivity. They also acknowledged the need to maintain human oversight of AI-generated outputs. Their anticipated adoption appears contingent on how well AI software aligns with their existing technological competencies and whether the efficiency gains clearly outweigh the learning curve, suggesting that impact will vary based on individual technological adaptability and the perceived value-to-effort ratio.
The disconnect between AI-generated outputs and researchers’ analytical intentions raises more profound questions about the compatibility between algorithmic processing and interpretive inquiry. While existing literature has explored AI’s potential to democratize qualitative analysis, this study suggests that such democratization may come at the cost of conceptual precision. The implications for practice are substantial, indicating that successful AI integration requires not merely technical training but enhanced emphasis on foundational qualitative concepts. For research education, this finding suggests the need to redesign the curriculum to strengthen conceptual understanding before introducing technological tools. The study contributes to theoretical knowledge by revealing how AI tools may create new forms of analytical dependency that require careful pedagogical consideration.
Positioning AI as an assistant rather than a replacement reflects a mature understanding of qualitative inquiry’s interpretive demands and suggests that concerns about AI replacing human researchers may be overstated. Conceptually, this finding illuminates how professional researchers navigate technological adoption through pragmatic evaluation rather than wholesale acceptance or rejection. The tension between efficiency gains and the preservation of critical thinking reveals more profound questions about the nature of qualitative expertise and how it might be augmented rather than automated.
This perspective aligns with broader discussions in qualitative methodology about maintaining analytical rigor while embracing technological innovation. The implications for practice suggest that effective AI integration requires explicit frameworks for human oversight and quality control. For pedagogical development, this finding indicates the need for training approaches that emphasize complementary rather than competitive relationships between human and artificial intelligence. The study’s contribution lies in demonstrating how experienced researchers develop sophisticated frameworks for technology evaluation that balance pragmatic benefits with methodological integrity, offering a model for responsible AI adoption in qualitative research.
The dynamics of AI adoption revealed through this theme illuminate broader patterns of technology integration within academic environments, addressing research questions about perceived impact through the lens of organizational and individual change processes. The tension between efficiency gains and system familiarity reflects classic innovation adoption challenges, where perceived usefulness must overcome established workflow preferences and technological comfort zones.
This finding reveals that AI adoption in research contexts is not merely a technical decision but involves complex negotiations between professional identity, methodological tradition, and technological possibility. The significance of prior technological experience as a moderating factor suggests that AI integration success depends heavily on users’ existing digital competencies and attitudes toward technological change. This finding challenges assumptions about uniform patterns of AI adoption and highlights the importance of individualized approaches to technology integration. The relationship between ease of use and adoption willingness aligns with established technology acceptance frameworks while revealing unique considerations for research tools that require both technical proficiency and methodological understanding.
For practice, this suggests that successful AI implementation requires attention to user experience design and gradual integration strategies that build on existing competencies. The implications for institutional support include the need for differentiated training approaches that account for varying levels of technological comfort and experience. This study contributes to the understanding of academic technology adoption by revealing how research-specific considerations shape general technology acceptance patterns, particularly the importance of maintaining methodological integrity while pursuing efficiency gains.
Limitations and Recommendations for Future Research
We were limited in the number of participants we could recruit for the study because of the number of individual subscriptions (25) we had for Intellectus 2.0. We expected all those who completed the initial phase of data collection to train on the AI tool and be interviewed. However, only a fraction of questionnaire participants opted into the second phase. The small number of participants in the second phase of data collection was a surprise, especially given the incentive of six months of free access to an AI-driven research tool. Because a lack of time was considered a barrier to research output, it is surprising that participants did not choose to learn a tool that could save them time on data analysis, a famously time-consuming part of the research workflow for qualitative researchers. A possible explanation may come from Shata’s (2025) finding that faculty were reluctant to adopt AI tools partly due to anxiety stemming from a lack of formal training in such tools. Perhaps the self-paced, online, and informal nature of the training on Intellectus 2.0 did not provide enough structure and required too much initiative and time to complete. Future research could explore why invited participants opted out of the interview phase.
Regardless, while generalizability was not the purpose of this exploratory study, had we anticipated the disparity in participation between the first and second phases of data collection, we would have recruited a much larger pool of questionnaire participants, thereby strengthening the trustworthiness of the data. Any future replication of this study will recruit accordingly.
Policy Recommendations
Based on the research conducted here, along with our own experience using the qualitative AI software, we recommend that leaders in higher education institutions consider adopting policies on the use of qualitative AI tools to conduct research. While qualitative analysis tools like NVivo have existed for years, AI goes further by automating aspects of analysis that have traditionally depended on human judgment. The training for the AI tool Intellectus 2.0 emphasizes the need for the researcher to review all data and provides tools that allow the researcher to examine how the themes were created, including returning to the original transcript data. We strongly agree with that advice, as we found that the AI coding did not always capture participants’ intended meaning in their use of words or phrases. This aligns with conclusions by other researchers who caution that AI can miss nuanced information (van Dinter et al., 2021) and who stress the need for human interpretation in meaning making (Marshall & Wallace, 2019). We believe that, while the AI program is a great time-saver and can provide a different perspective, nothing replaces the researcher immersing themselves in the data. Qualitative research is ultimately a human experience understood and interpreted by humans. Therefore, we recommend that higher education institutions create policies and expectations requiring faculty to disclose which AI tool they used, how the tool was used, and how the output was verified by the researcher. This will create transparency and accountability.
Conclusion
The landscape of academic research in higher education is undergoing a transformation driven by rapid technological advancement and continued productivity pressures. Faculty members face pressure to publish scholarly work while simultaneously managing teaching responsibilities, administrative duties, and service commitments. The limited literature on the intersection of publication expectations and emerging artificial intelligence (AI) technologies presented a clear opportunity to address this gap. Our findings suggest that while faculty members appreciate the efficiency gains AI tools can bring to their research workflows, they also harbor healthy skepticism about the outputs of these tools. All of those we interviewed said that researchers must have a fundamental understanding of research processes to evaluate the accuracy and quality of what AI tools generate. Overall, this study contributes to the emerging yet still limited literature on HEI faculty pressures and their perceptions and use of AI-assisted qualitative tools.
