Nubla-Kung, A., Rankin, P., & Dereshiwsky, M. (2026). Higher Education Faculty Members’ Perceptions of an AI-driven Qualitative Data Analysis Tool for Their Research: An Exploratory Study. Journal of Online Graduate Education, CRI, 26–45. https://doi.org/10.65201/001c.158996

Abstract

The publish-or-perish phenomenon has traditionally been part of faculty professional survival and development in higher education. This, combined with the increasing ubiquity of AI tools in academia, provided the impetus for a study on AI and faculty research in higher education institutions (HEIs). Specifically, we wanted to study how training in using an AI-driven qualitative analysis tool (Intellectus 2.0) would affect participants’ perceptions of their research. Qualitative methodology served as a framework to collect data with questionnaires and interviews to answer three research questions: (1) What knowledge do HEI faculty members have regarding the use of AI qualitative analytic tools to conduct research?; (2) After training on the software, how do HEI faculty members believe it will impact their research?; and (3) In what areas do the trained HEI faculty members believe the software would be helpful to them? Twenty-three HEI faculty members completed the first phase of data collection (a questionnaire); of these, five participated in the second phase (training on an AI-assisted qualitative data analysis tool and an interview). Questionnaire data aligned with current research on faculty members’ limited experience with AI tools for research, while interview data revealed the need for a balance between AI adoption to increase research workflow efficiency and the human expertise needed to analyze AI outputs. The implications of this study are discussed, along with recommendations for future research, policy, and practice in higher education.

Whether real or myth, the perception of the publish-or-perish phenomenon affects the reality of faculty members in higher education institutions (HEIs). Even in HEIs that do not explicitly tie publication to job security, there is cultural and internal pressure to research and publish (Waaijer et al., 2018). The pitfalls of the publish-or-perish culture include ethical concerns (Hourneaux et al., 2024) and salami science, the practice of dividing a research study into many parts for the purpose of maximizing the quantity of scholarship dissemination (Pfleegor et al., 2019). Across many fields, there are calls to shift from a publish-or-perish culture to cultures of publish-and-innovate (van Dalen, 2021) and publish-and-social-impact (Waters, 2024). Such shifts would ostensibly move the emphasis from publishing for quantity to publishing for quality. Kiai (2019) exhorted researchers to prioritize methodological rigor over prolific publishing.

With the advent of Artificial Intelligence (AI) tools that presumably make the process of generating papers easier, the marriage of quality and quantity in faculty scholarship may be possible. The use of AI tools in many fields has had its share of controversy in terms of ethics (Benefo et al., 2022), usefulness (Garcia, 2024), and trustworthiness (Yang & Wibowo, 2022). However, as more tools become available and more people use them in many contexts, the question for faculty may no longer be a matter of if, but of which, when, and how.

AI tools are increasingly embedded in research workflows, representing a trending shift in how higher education faculty researchers create, analyze, and disseminate knowledge. From literature review automation to data analysis, writing, and editing assistance, AI applications are being integrated into the research process. At the same time, the publish-or-perish culture continues to shape faculty’s career development. Understanding how these two phenomena intersect is important for developing appropriate policies, supporting faculty well-being, and maintaining research integrity.

Publish-or-Perish and Its Effect on Research

The phrase publish or perish has long been part of academic life, and while the emphasis on research productivity has historical roots in the professionalization of academia, recently, there has been an intensification of publication expectations. De Rond and Miller (2005) examined the evolution of publication pressure, noting that the practice began as a way for scholars to disseminate knowledge and to ensure quality. This practice has unfortunately been transformed into a metric-driven system where quantity often overshadows quality.

This tension between research quantity and quality has been a prime concern in discussions of publication culture. Muborak (2024) found that publication pressures adversely affect research quality. In a review of the publish-or-perish paradigm, Rawat and Meena (2014) argued that this has led to questionable research practices, including salami-slicing of research findings, duplicate publication, and prioritizing incremental work over innovative scholarship.

Fanelli (2010) found that strong publication pressures in competitive academic environments were associated with higher rates of questionable research practices. Similarly, Tijdink et al. (2014) found a direct link between publication pressure and compromised research integrity. Taken together, these findings suggest that publish-or-perish environments may not be conducive to the production of quality research.

Artificial Intelligence Technologies in Research

In recent years, machine learning, natural language processing, and automated analysis systems have made AI tools increasingly accessible to researchers across disciplines, changing how scholarly work is done. Hajkowicz et al. (2023) conducted a bibliometric analysis and found that AI applications have rapidly diffused from specialized computational domains such as computer science to broader adoption across the social sciences and humanities. More recently, Karjus (2025) documented how generative AI is increasingly being adopted as it gains the ability to undertake complex qualitative tasks.

Researchers have benefited from integrating AI into their work. Within the research workflow, the literature review is one particular part that has been transformed by AI capabilities. Marshall and Wallace (2019) discussed how AI tools have helped to speed up the process of systematic reviews, which is one of the most time-intensive activities in research. Additionally, the integration of AI into qualitative research methods represents an emerging frontier. Bryda and Sadowski (2024) showed how AI can support bottom-up thematic analysis of interview transcripts using inductive coding.

Though there are many benefits to AI, many researchers promote caution. Van Dinter et al. (2021) underscored the gains and the risks of overreliance on AI tools, which may miss nuanced information. For qualitative research specifically, Marshall and Wallace (2019) emphasized that human interpretation remains essential for meaning-making. Across research methodologies, Perkins and Roe (2024) examined the use of AI tools in quantitative and qualitative research and discussed how these tools enhanced research productivity but cautioned how their integration raises questions about research integrity, security, and authorship.

Faculty Perspectives on AI Use in Research: Benefits and Barriers

Recent survey studies of HEI faculty have found that adoption rates varied significantly by gender (Brown et al., 2025), discipline (Spathopoulou et al., 2025), and career stage (Mohammadi et al., 2026). Grassini (2023) investigated attitudes toward AI among academic staff, identifying a complex mixture of optimism and concern. While faculty recognized the potential efficiency benefits, many expressed uncertainties about appropriate use boundaries, reliability, and the implications for their professional roles. Faculty members who have adopted AI tools in research contexts cite efficiency gains as a primary benefit. Bolanos et al. (2024) analyzed the latest AI-assisted literature review tools and demonstrated their ability to process larger volumes of literature more quickly, identify relevant studies that might otherwise be missed, and stay current with rapidly expanding fields.

Despite potential benefits, concerns about AI use in research persist among faculty, including academic integrity. Perkins (2023) surveyed faculty regarding AI and academic integrity, finding widespread worry that AI tools could facilitate plagiarism, obscure attribution, and undermine the authenticity of scholarly work. Many participants expressed uncertainty about where to draw the line between acceptable assistance and inappropriate use.

The trust and reliability of AI outputs constitute additional significant barriers. Shata’s (2025) study found that faculty expressed concerns about the accuracy of AI-generated results. In addition, the reproducibility crisis in machine learning demonstrates that many AI systems produce inconsistent or erroneous results (Ball, 2023). A lack of preparation for effective AI use emerged as another significant barrier to adoption across multiple studies: faculty often felt unprepared (Shata, 2025), lacked confidence in their ability to critically evaluate AI outputs (Trung, 2025), and had received few, if any, formal training opportunities in AI applications relevant to their research (Shata, 2025). This skills gap created anxiety and reluctance to experiment with available tools.

The Intersection: AI Adoption and Publication Pressure

The relationship between publication pressure and AI adoption represents an underexplored but critical area of inquiry. Evidence suggests that faculty experiencing high publication demands may be particularly motivated to adopt AI tools as productivity enhancers. While comprehensive empirical studies directly examining this relationship remain limited, existing research provides suggestive evidence. For example, Heaven (2022) reported on researchers’ use of AI writing tools, noting that time pressure and productivity demands were frequently cited motivations for adoption. Several interviewed researchers explicitly connected their AI use to the need to maintain competitive publication rates in resource-constrained environments.

Research Gap

In general, the purpose of this study is to explore, qualitatively, the intersection of AI and the publish-or-perish phenomenon for HEI faculty. There are critical gaps in the literature on this relationship. We aim to contribute to the emerging yet still limited literature on HEI faculty and their perceptions and use of AI-assisted qualitative tools.

Research Questions

Three research questions framed this study:

  1. What knowledge, if any, do HEI faculty members have regarding the use of AI qualitative analytic tools to conduct research?

  2. After training on the software, how do HEI faculty members believe it will impact their research?

  3. In what areas do the trained HEI faculty members believe the software would be helpful to them?

Methodology

The methodology for this exploratory study was basic qualitative research as described by Merriam and Tisdell (2015). Much early research on AI in academia has centered on people’s general perceptions of using AI in teaching, learning, and research. As a logical next step, the purpose of this study was to explore how higher education faculty felt about AI-assisted analysis in their research before and after training on a specific qualitative data analysis tool (Intellectus 2.0).

Intellectus Qualitative (2026, also known as Intellectus 2.0) is an AI-driven qualitative analysis platform. Users upload their textual data, and the tool can perform automatic inductive and deductive coding, thematization, research question alignment, memoing (tracking analytic process with timestamps), and report production (including theme descriptions with corresponding excerpts). Intellectus 2.0 also has the capability to transcribe non-textual data (e.g., audio/video).

Participants

We used purposive sampling techniques to recruit participants. Eligible participants had to be higher education institution faculty members (of any status) who had conducted at least one qualitative research study. We sent announcements via professional organization newsletters (e.g., the American Educational Research Association), university faculty meetings, and our professional networks. Potential participants were informed that they would need to complete a self-paced training on Intellectus 2.0 by a certain date (within our data collection timeframe).

A total of 32 participants responded to our invitation to participate in the study; 30 met eligibility criteria, and of these, 23 provided consent and completed the initial phase of data collection (a questionnaire). This initial phase provided pre-training data. For the second phase of data collection, the questionnaire respondents were invited via email to train on the AI-assisted qualitative analysis tool and to participate in an interview session. As an incentive, interview participants were given access to the AI tool for approximately six months after the data collection timeframe. Eight participants accepted our invitation, but due to various reasons (e.g., inability to train on the tool, scheduling conflicts), five were interviewed.

Instruments and Data Collection

Two data collection instruments were used in the study: a questionnaire and semi-structured interviews. The questionnaire, hosted on the SurveyMonkey platform, included three parts: an eligibility criteria section, a consent-to-participate section, and a main body with 21 questions (Likert-scale, multiple-choice, checkboxes, short-answer). The main body of the questionnaire was also divided into sections: AI tools experience, research output, demographic information, and contact information (see Appendix A). Participants accessed the questionnaire through a link provided in the recruitment materials. The questionnaire took less than 10 minutes to complete.

The AI tools section provided descriptive data to answer the first research question and, along with the research output and demographic information sections, data on participants’ characteristics. The last section invited participants to the second phase of data collection. Descriptive statistics (e.g., means and frequencies) from the questionnaire were used to supplement and contextualize our qualitative findings. As such, the questionnaire data helped to establish a baseline from which the participants started the Intellectus 2.0 training and enabled us to analyze changes in perspectives on AI tool use in research after the training process. For example, an interview question revisited the barriers participants had reported in the questionnaire and asked whether AI tools could help overcome them. In addition, participant characteristics, such as experience with qualitative research and AI tools, helped us contextualize our interview findings.

The second data collection instrument was a semi-structured interview (see Appendix B). The interview protocol included questions about participants’ opinions of the Intellectus training, their general AI tool use and its efficacy, the potential effects of AI tools on their research output, and the ethical implications of AI tool use in research. Additional questions were asked during the interviews only to clarify responses or elicit elaboration.

Over five months, we sent multiple emails to all participants. The initial email included an access link to Intellectus 2.0 and requested that they contact us once they had completed the training. Eight participants responded. Five of them went through self-guided training. Training resources provided by Intellectus included a monthly qualitative session, weekly overview sessions, an FAQ list, and a resource video library. There were no requirements on which resources participants used for their training. We only required that they use their own data for exploring the tool. Once they felt that they had sufficiently learned the software, they contacted us for an interview. The principal investigator, Dr. Nubla-Kung, scheduled and conducted Zoom interviews averaging 35 minutes.

Data Analysis

For the questionnaire data, the process of analysis began with exporting the data from the SurveyMonkey platform into an Excel sheet. The data were analyzed using descriptive analysis. Specifically, means and percentages were calculated for the questionnaire items.

For the interview data, transcripts were created by the Zoom platform and were checked for accuracy. These transcripts were uploaded into Intellectus 2.0, which provided categories and themes. Additionally, to increase validity, Dr. Rankin analyzed the transcripts using thematic analysis, starting with line-by-line coding, then focused codes, categories, and finally themes. Then all the researchers came together to review both sets of codes and themes. This collaboration allowed for investigator triangulation, which involves the participation of two or more researchers in the same study to provide multiple observations and conclusions (Flick, 2022). This type of triangulation can bring both confirmation of findings and different perspectives, adding breadth to the phenomenon of interest (Carter et al., 2014; Flick, 2022). Ultimately, we found that while the AI did not do coding well, some of the themes it developed were insightful and allowed us to reflect on our own observations.

Findings

Participant Characteristics

Twenty-three HEI faculty members participated in the first phase of data collection. The sample spanned a range of higher education roles (lecturers, assistant and full professors, deans, program directors, dissertation chairs, and advisors) and represented 12 public and five private higher education institutions. Participants’ experience in their HEI roles ranged from less than a year to more than 10 years. In terms of research output, most published their work in journals an average of one to two times per year and presented at conferences at the same rate.

From the questionnaire data, we found that most of our participants had experience with AI tools in general, ranging from “little” (43%) to “moderate” (48%). In terms of their research work, 5% used AI tools “a lot”, 18% used them a “moderate amount”, 64% “a little”, and 14% “none at all”. For those who used AI tools in their research work, an equal percentage (64%) used them for editing and finding citations, whereas 41% and 36% of them used AI tools for writing and data analysis, respectively. Other reported uses for AI tools included preparing proposals, writing code, planning, paraphrasing, and cross-checking. Regarding AI use in the future, a majority (89%) of participants believed that AI tools could help them conduct more research, especially to circumvent the “lack of time” barrier.

The five interview participants represented two public R1 institutions and one private RCU institution, where one served as a program director and the others taught as an adjunct (1), an assistant professor (1), and full professors (2). These participants were highly experienced: two had been in their roles for 6-10 years, and three for more than 10 years. Their research output followed the same pattern as the full sample’s, with participants publishing and presenting at conferences one to two times per year on average. None of them had prior experience with Intellectus 2.0.

Interview Findings

After analyzing the data collected in the interviews, two themes emerged: The Experience of Learning and Using an AI Program for Qualitative Research and Considering the Future Use of AI Programs for Qualitative Research (see Table 1).

Table 1. Themes, Categories, and Focused Codes

Theme 1: The Experience of Learning and Using an AI Program for Qualitative Research
  Category: AI Adoption Through Experiential Understanding
    Focused code: Self-directed Learning Preference
  Category: Complementary but Imperfect Assistant
    Focused codes: AI as a Complementary Research Tool; AI as Imperfect Research Tool; Usability Challenges and Benefits

Theme 2: Considering the Future Use of AI Programs for Qualitative Research
  Category: Future AI Use in Research
    Focused codes: Selective Use of AI Programs; Working Faster
  Category: Ethics of AI in Research
    Focused codes: Need to Know Your Data; Copy and Pasting is Unethical

Theme 1: The Experience of Learning and Using an AI Program for Qualitative Research

The following categories were used to develop this theme: AI Adoption Through Experiential Understanding, Complementary but Imperfect Assistant, and Balancing Efficiency and Familiarity.

AI Adoption through Experiential Understanding. All but one participant completed the full training, which consisted of a series of videos demonstrating how to use the AI program. The four who completed it found it helpful, enabling them to use all the program’s features. John stated, “it answered my basic questions. It did more than I thought it could do, based on, you know, my own just dabbling. I got more out of it. I saw that it could do more.” The participant who did not complete the training expressed a desire to learn by using the system. Clay said, “The interface was very intuitive, so I loaded up some data I had from a prior study, and it was super easy to use it… the only how-to or training I did was how to set up my account.” She went on to say:

[Later] I wanted to go and look at the training videos. I could not find them. They’re very difficult to find. So, I went to look for them, and… it took me too long to find them, so I’m like, I’ll just see if I can figure this out on my own.

Like Clay, Claire wanted to jump straight in and use it, but she did go through the training, saying, “the experience was fine. I had to go through some steps and boxes, but honestly, I just wanted to get my hands dirty with just the data set I had already done.”

Complementary but Imperfect Assistant. The participants viewed the AI program as a helpful but limited assistant rather than as a replacement for human analysis. Clay said it best, “I felt like there’s definitely a place for it, but I definitely [think], at this point, anyway, it’s not like it could replace a human.” John had a more detailed breakdown of where the software did not do enough. He said,

I was trying to look at organizational resiliency specifically, in terms of COVID, and so I had done a bunch of COVID interviews, and I really was interested in if it [the AI] could, pull out various examples of organizational resiliency. And I think it did an okay job… it gave me a lot of stuff that was about resiliency. The problem was more about the project itself, and I couldn’t differentiate it from COVID. There’s no, at least that I could see, no way to instruct it like I could with ChatGPT, like, no, no, no, don’t… only do it with COVID-related stuff.

The inability to be more specific through a command meant that John would have to do more manual sorting of the data that the software included in its analysis. In contrast to John, Claire found the software faster than if she had done it herself. She said,

It took a lot less time than if I was doing it myself. The way it came up with the themes wasn’t my wording, or the way I would word it, or communicate it… so if I would have used it and come up with those themes. I probably would have reworded it.

Gary felt similarly to Claire and stated he would probably use this software to help his faculty do more research, as discussed further in the next theme.

Theme 2: Considering the Future Use of AI Programs for Qualitative Research

While the first part of the interview focused on the experience of using the program, the second half focused on the participants’ reflections on the implications of using the software in the future. This theme came from that part of the interview and had two categories: Future AI Use in Research and Ethics of AI in Research.

Future AI Use in Research. Each participant felt they would use the software or other AI software in the future when analyzing research. Gary thought that it would be a good tool to help faculty publish as required by his accrediting body. When asked about how it would help, he said,

Coding takes a long time…freaking a long, long time. It was an Excel spreadsheet with about 12 sheets that kept getting smaller and smaller…it was a lot of different views of how words were analyzed. It’s about six months, and then the question is, how long [did the software take]… It kind of… it took a lot of the… the crunching out of it.

Jason felt he might personally continue to use the software if he had more training. He said,

I think if I could learn to use it better, it would help me work faster, because I have no qualms with… I mean, in the end, I don’t really have any qualms with… [software]…helping me create my categories. If I already know what my categories should be, more or less…And it doesn’t create a lot of noise, and it just helps me get to that place where then I can do the fine-tuning.

Claire suggested a similar idea, stating she would probably use it to supplement her own analysis.

Ethics of AI in Research. While discussing future use, the faculty also considered some ethical implications of the AI software, including its use by students. All the participants felt that using AI in research analysis was ethical if the researcher knew their data. Jason summed it up best,

I’m not sure if it rises to the level of being unethical. But you have to know what’s in your data, I feel like, but, like I said, it’s so seductive. Once you know it can do this, you just throw your data in there and say, go at it, and then tell me what you found. I think without even having to read the interview.

The other participants felt the same. They also felt that students might be tempted in this way.

The biggest concern expressed about student use, especially at the doctoral level, was that students would not learn the skills needed to verify, when they do use AI, that what it did was correct. When discussing students using the software, Gary said,

Well, I want students to use it [and] I want students to manually code. I also want them to learn how to use AI. I would prefer… you can’t go as far as saying I demand…that they manually code and then use the AI to validate. Determine if there’s any differences, and that helps them to. Critically analyze the AI today, and what to look for in the future.

Claire brought up another ethical concern about disclosure,

from a business perspective, because I’m a business person. We used to do financial statement analysis and banking. And we use tools to spit out an analysis. And I think if someone just had it spit it out, and that was their analysis. They should disclose that to whoever they’re providing the analysis to. They should say, hey, I use this, and this is the analysis it gave me.

The disclosure of AI use was a common concern among all participants.

Discussion

The findings presented in the preceding section illuminate the complex landscape of faculty perceptions and experiences with AI qualitative analytic tools in higher education institutions. These results, derived from an in-depth exploration of participants’ knowledge, expectations, and practical experiences, require careful interpretation within the broader context of technology adoption in academic research. The following sections are organized to address our three research questions, as well as the study’s limitations, recommendations for future research, and implications for practice and policy.

What knowledge, if any, do HEI faculty members have regarding the use of AI qualitative analytic tools to conduct research?

Questionnaire data showed that our participants’ knowledge of and experience with AI tools in their research ranged from little to moderate. This aligns with previous survey data showing that 40% of global faculty were beginning their AI literacy journey (Digital Education Council, 2024). All participants in the study indicated they had no experience with qualitative AI software; however, they were familiar with AI chat tools such as ChatGPT, Gemini, and Claude. One interview participant, Jason, indicated that he had used the software NVivo many years earlier to analyze data, but that the software did not have an AI component.

After training on the software, how do HEI faculty members believe it will impact their research?

The theme The Experience of Learning and Using an AI Program for Qualitative Research answers this research question by revealing that the faculty members’ knowledge of AI qualitative analytic tools is predominantly developed through hands-on experimentation rather than formal training. It demonstrates that when the faculty engage with these tools, they tend to build their understanding through direct application and self-guided exploration, leveraging their existing qualitative research expertise. The theme suggests that faculty knowledge is characterized by an experiential learning approach, in which they develop competency through practical application and autonomous learning, seeking specific guidance only when necessary. This indicates that the faculty’s knowledge of AI tools is often intuitive and practice-based rather than theoretical or comprehensively structured.

The participants’ perception of AI as a valuable but limited tool could be due to the hands-on approach described above. However, the data shows that the faculty still believe AI software could enhance research efficiency for routine tasks while requiring significant human oversight and expertise. This balanced perspective suggests faculty are developing a nuanced understanding of how to integrate AI tools into their qualitative research workflows without compromising methodological rigor or analytical depth, recognizing that while AI can support and expedite specific processes, human researchers must remain the primary analytical agents who bring theoretical sophistication and interpretive judgment.

In what areas do the trained HEI faculty members believe the software would be helpful to them?

The training and use of the software demonstrated that the participants recognized the potential for AI to streamline workflows and increase research productivity. They also acknowledged the need to maintain human oversight of AI-generated outputs. Their anticipated adoption appears contingent on how well AI software aligns with their existing technological competencies and whether the efficiency gains clearly outweigh the learning curve, suggesting that impact will vary based on individual technological adaptability and the perceived value-to-effort ratio.

The disconnect between AI-generated outputs and researchers’ analytical intentions raises more profound questions about the compatibility between algorithmic processing and interpretive inquiry. While existing literature has explored AI’s potential to democratize qualitative analysis, this study suggests that such democratization may come at the cost of conceptual precision. The implications for practice are substantial, indicating that successful AI integration requires not merely technical training but enhanced emphasis on foundational qualitative concepts. For research education, this finding suggests the need to redesign the curriculum to strengthen conceptual understanding before introducing technological tools. The study contributes to theoretical knowledge by revealing how AI tools may create new forms of analytical dependency that require careful pedagogical consideration.

Positioning AI as an assistant rather than a replacement reflects a mature understanding of qualitative inquiry’s interpretive demands and suggests that concerns about AI replacing human researchers may be overstated. Conceptually, this finding illuminates how professional researchers navigate technological adoption through pragmatic evaluation rather than wholesale acceptance or rejection. The tension between efficiency gains and the preservation of critical thinking reveals more profound questions about the nature of qualitative expertise and how it might be augmented rather than automated.

This perspective aligns with broader discussions in qualitative methodology about maintaining analytical rigor while embracing technological innovation. The implications for practice suggest that effective AI integration requires explicit frameworks for human oversight and quality control. For pedagogical development, this finding indicates the need for training approaches that emphasize complementary rather than competitive relationships between human and artificial intelligence. The study’s contribution lies in demonstrating how experienced researchers develop sophisticated frameworks for technology evaluation that balance pragmatic benefits with methodological integrity, offering a model for responsible AI adoption in qualitative research.

The dynamics of AI adoption revealed through this theme illuminate broader patterns of technology integration within academic environments, addressing research questions about perceived impact through the lens of organizational and individual change processes. The tension between efficiency gains and system familiarity reflects classic innovation adoption challenges, where perceived usefulness must overcome established workflow preferences and technological comfort zones.

This finding reveals that AI adoption in research contexts is not merely a technical decision but involves complex negotiations between professional identity, methodological tradition, and technological possibility. The significance of prior technological experience as a moderating factor suggests that AI integration success depends heavily on users’ existing digital competencies and attitudes toward technological change. This finding challenges assumptions about uniform patterns of AI adoption and highlights the importance of individualized approaches to technology integration. The relationship between ease of use and adoption willingness aligns with established technology acceptance frameworks while revealing unique considerations for research tools that require both technical proficiency and methodological understanding.

For practice, this suggests that successful AI implementation requires attention to user experience design and gradual integration strategies that build on existing competencies. The implications for institutional support include the need for differentiated training approaches that account for varying levels of technological comfort and experience. This study contributes to the understanding of academic technology adoption by revealing how research-specific considerations shape general technology acceptance patterns, particularly the importance of maintaining methodological integrity while pursuing efficiency gains.

Limitations and Recommendations for Future Research

We were limited in the number of participants we could recruit because we had only 25 individual subscriptions to Intellectus 2.0. We expected everyone who completed the initial phase of data collection (the questionnaire) to train on the AI tool and be interviewed; however, only a fraction of questionnaire participants opted into the second phase. The small number of second-phase participants was a surprise, especially given the incentive of six months of free access to an AI-driven research tool. Given that lack of time was considered a barrier to research output, it's also surprising that participants didn't choose to learn about a tool that might save them time on data analysis, a famously time-consuming part of the research workflow for qualitative researchers. A possible explanation comes from Shata's (2025) finding that faculty were reluctant to adopt AI tools partly because of anxiety stemming from a lack of formal training in such tools. Perhaps the self-paced, online, and informal nature of the Intellectus 2.0 training didn't provide enough structure and demanded too much initiative and time to complete. Future research could explore the reasons of those who opted out of the interview.

Regardless, while generalizability was not the aim of this exploratory study, had we known how few first-phase participants would continue to the second phase, we would have recruited a much larger number of participants for the questionnaire, thereby strengthening the trustworthiness of the data. Any future replication of this study will include a larger recruitment effort.

Policy Recommendations

Based on the research conducted here, along with our own experience using the qualitative AI software, we recommend that leaders in higher education institutions consider adopting policies on the use of qualitative AI tools in research. While qualitative analysis tools like NVivo have existed for years, AI-driven tools go further by automating parts of the analysis that previously required the researcher's direct engagement with the data. The training for Intellectus 2.0 emphasizes that the researcher must review all data, and the software includes features for examining how themes were created, including tracing them back to the original transcript data. We strongly agree with that advice, as we found that the AI coding didn't always capture participants' meaning in their use of words or phrases. This aligns with other researchers' cautions about AI missing nuanced information (van Dinter et al., 2021) and the need for human interpretation in meaning making (Marshall & Wallace, 2019). We believe that, while the AI program is a great time-saver and can provide a different perspective, nothing replaces the researcher immersing themselves in the data; qualitative research is ultimately a human experience understood and interpreted by humans. We therefore recommend that higher education institutions create policies and expectations requiring faculty to disclose which AI tool they used, how the tool was used, and how the output was verified by the researcher. Such requirements would create transparency and accountability.

Conclusion

The landscape of academic research in higher education is undergoing a transformation driven by rapid technological advancement and continued productivity pressures. Faculty members face pressure to publish scholarly work while simultaneously managing teaching responsibilities, administrative duties, and service commitments. The limited literature on the intersection of publication expectations and emerging artificial intelligence (AI) technologies presented a clear opportunity to fill that research gap. Our findings suggest that while faculty members appreciate the efficiency gains AI tools can provide to their research workflows, they also harbor healthy skepticism about the outputs of these tools. All of those we interviewed said that researchers must have a fundamental understanding of research processes to analyze the accuracy and quality of what AI tools generate. Overall, this study contributes to the emerging yet still limited literature on HEI faculty pressures and their perceptions and use of AI-assisted qualitative tools.

Accepted: March 09, 2026 EDT

References

Ball, P. (2023). Is AI leading to a reproducibility crisis in science? Nature, 624, 22–25. https://doi.org/10.1038/d41586-023-03817-6
Benefo, E. O., Tingler, A., White, M., Cover, J., Torres, L., Broussard, C., Shirmohammadi, A., Pradhan, A. K., & Patra, D. (2022). Ethical, legal, social, and economic (ELSE) implications of artificial intelligence at a global level: A scientometrics approach. AI and Ethics, 2(4), 667–682. https://doi.org/10.1007/s43681-021-00124-6
Bolaños, F., Salatino, A., Osborne, F., & Motta, E. (2024). Artificial intelligence for literature reviews: Opportunities and challenges. Artificial Intelligence Review, 57, 259. https://doi.org/10.1007/s10462-024-10902-3
Brown, R., Sillence, E., & Branley-Bell, D. (2025). AcademAI: Investigating AI usage, attitudes, and literacy in higher education and research. Journal of Educational Technology Systems, 54(1), 6–33. https://doi.org/10.1177/00472395251347304
Bryda, G., & Sadowski, D. (2024). From words to themes: AI-powered qualitative data coding and analysis. In J. Ribeiro, C. Brandão, M. Ntsobi, J. Kasperiuniene, & A. P. Costa (Eds.), Computer supported qualitative research. Springer. https://doi.org/10.1007/978-3-031-65735-1_19
Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547. https://doi.org/10.1188/14.ONF.545-547
De Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329. https://doi.org/10.1177/1056492605276850
Digital Education Council. (2024). Digital Education Council global AI student survey: AI or not AI: What students want. DEC. https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024
Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS ONE, 5(4), e10271. https://doi.org/10.1371/journal.pone.0010271
Flick, U. (2022). Revitalising triangulation for designing multi-perspective qualitative research. In U. Flick (Ed.), The Sage handbook of qualitative research design (Vol. 2, pp. 652–664). SAGE Publications Ltd.
Garcia, M. B. (2024). Using AI tools in writing peer review reports: Should academic journals embrace the use of ChatGPT? Annals of Biomedical Engineering, 52, 139–140. https://doi.org/10.1007/s10439-023-03299-7
Grassini, S. (2023). Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences, 13(7), 692. https://doi.org/10.3390/educsci13070692
Hajkowicz, S., Sanderson, C., Karimi, S., Bratanova, A., & Naughtin, C. (2023). Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960–2021. Technology in Society, 74, 102260. https://doi.org/10.1016/j.techsoc.2023.102260
Heaven, W. D. (2022, December 23). What’s next for AI? MIT Technology Review. https://www.technologyreview.com/2022/12/23/1065852/whats-next-for-ai/
Hourneaux, F., Hamza, K. M., & Cordeiro, R. A. (2024). Editorial: The “publish and perish” phenomenon: How journals can be affected by it and survive. RAUSP Management Journal, 59(3), 206–211. https://doi.org/10.1108/RAUSP-07-2024-280
Intellectus Qualitative. (2026). Intellectus Qualitative (Version 2.0) [Computer software]. Intellectus360. https://intellectus360.com/intellectus-qualitative.html
Karjus, A. (2025). Machine-assisted quantitizing designs: Augmenting humanities and social sciences with artificial intelligence. Humanities and Social Sciences Communications, 12, 277. https://doi.org/10.1057/s41599-025-04503-w
Kiai, A. (2019). To protect credibility in science, banish “publish or perish.” Nature Human Behaviour, 3, 1017–1018. https://doi.org/10.1038/s41562-019-0741-0
Marshall, I., & Wallace, B. (2019). Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8, 163. https://doi.org/10.1186/s13643-019-1074-9
Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. John Wiley & Sons.
Mohammadi, E., Thelwall, M., Cai, Y., Collier, T., Tahamtan, I., & Eftekhar, A. (2026). Is generative AI reshaping academic practices worldwide? A survey of adoption, benefits, and concerns. Information Processing & Management, 63(1), 1–13. https://doi.org/10.1016/j.ipm.2025.104350
Muborak, Q. (2024). The impact of key performance indicators on faculty research in higher education. International Journal for Multidisciplinary Research, 6(5), 1–14. https://www.ijfmr.com/papers/2024/5/28222.pdf
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2), Article 7. https://doi.org/10.53761/1.20.02.07
Perkins, M., & Roe, J. (2024). Generative AI tools in academic research: Applications and implications for qualitative and quantitative research methodologies. arXiv. https://arxiv.org/pdf/2408.06872
Pfleegor, A. G., Katz, M., & Bowers, M. T. (2019). Publish, perish, or salami slice? Authorship ethics in an emerging field. Journal of Business Ethics, 156, 189–208. https://doi.org/10.1007/s10551-017-3578-3
Rawat, S., & Meena, S. (2014). Publish or perish: Where are we heading? Journal of Research in Medical Sciences, 19(2), 87–89. https://pubmed.ncbi.nlm.nih.gov/24778659/
Shata, A. (2025). “Opting out of AI”: Exploring perceptions, reasons, and concerns behind faculty resistance to generative AI. Frontiers in Communication, 10, 1614804. https://doi.org/10.3389/fcomm.2025.1614804
Spathopoulou, F., Pitychoutis, K. M., & Papakonstantinidis, S. (2025). AI and higher education: Understanding faculty roles in teaching, research, and administration. Contemporary Educational Technology, 17(4), ep600. https://doi.org/10.30935/cedtech/17406
Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64–71. https://doi.org/10.1177/1556264614552421
Trung, N. T. (2025). Artificial intelligence in higher education: A systematic review of impacts, barriers, and emerging trends. IOSR Journal of Research & Method in Education, 15(3), 53–61. https://www.iosrjournals.org/iosr-jrme/papers/Vol-15%20Issue-3/Ser-1/H1503015361.pdf
van Dalen, H. P. (2021). How the publish-or-perish principle divides a science: The case of economists. Scientometrics, 126, 1675–1694. https://doi.org/10.1007/s11192-020-03786-x
van Dinter, R., Tekinerdogan, B., & Catal, C. (2021). Automation of systematic literature reviews: A systematic literature review. Information and Software Technology, 136, 106589. https://doi.org/10.1016/j.infsof.2021.106589
Waaijer, C. J., Teelken, C., Wouters, P., & van der Weijden, I. (2018). Competition in science: Links between publication pressure, grant pressure, and the academic job market. Higher Education Policy, 31, 225–243. https://doi.org/10.1057/s41307-017-0051-y
Yang, R., & Wibowo, S. (2022). User trust in artificial intelligence: A comprehensive conceptual framework. Electronic Markets, 32, 2053–2077. https://doi.org/10.1007/s12525-022-00592-6

Appendices

Appendix A: Questionnaire

Study Eligibility

  1. Are you a faculty member or instructor at a higher education institution? (MC: Y/N) If NO, direct to Thank you page.

  2. In which higher education institution(s) are you a faculty member? (short answer)

  3. What is your primary role at the higher education institution(s)? (short answer)

  4. How many qualitative studies have you conducted as a researcher? (short answer)

  5. Do you give your consent to participate in this study? (Images of and link to consent form provided) (MC: Y/N) Logic: If YES, direct to Q6. If NO, direct to Thank you page.

AI Tools Experience

  6. In general, how much experience do you have with AI tools? (MC: A great deal, A moderate amount, A little, None at all, No Response) Logic: If “None at all”, direct to Q15.

  7. In general, how comfortable are you with AI tools? (MC: Very comfortable, Somewhat comfortable, Not at all comfortable, No Response)

  8. Specifically, have you ever used an AI tool in your research work? (MC: A lot, A moderate amount, A little, None at all, No Response)

  9. In your research work, what did you use the AI tool(s) for? Check all that apply. (MC: Writing, Finding Citations, Editing, Data Analysis, No Response/Not Applicable, None of the Above; Short Answer: Other (please specify))

  10. How useful did you find the AI tool(s) in your research? (MC: Very useful, Somewhat useful, Not at all useful, Not Applicable)

  11. Have you ever used the Intellectus tool? (MC: Y/N/No Response) Logic: If NO, direct to Q15.

  12. If yes, how? Please describe. (short answer)

  13. Have you used AI tools outside of research work? (MC: Y/N/No Response) Logic: If NO, direct to Q15.

  14. If you answered yes to the previous questions, what AI tools have you used and how? (Comment box)

The Role of Research

  15. On average, how many journal articles per year do you author or co-author? (short answer)

  16. On average, how many times a year have you presented your research at a professional conference? (short answer)

  17. In what other ways have you shared your research in the past five years? (Comment box)

  18. In your opinion, what are the reasons why some HEI faculty members do not conduct original research? Check all that apply. (Checkboxes: Lack of Time, Lack of Money, Lack of Inspiration, Lack of Motivation, No Response; Short answer: Other)

  19. Do you believe that AI tools can help faculty members to conduct more original research on average? (MC: Y/N/No response)

  20. Please explain your response to Question 19. (Comment Box)

Demographic Information

  21. How long have you been a university faculty member?

Contact Information

  22. Please include your name and email. We will use it to contact you for enrollment in the Intellectus 2.0 Training module and to answer any questions you may have about this study. (Comment box)

Appendix B: Semi-Structured Interview Protocol

The following questions were asked during the semi-structured interview with each participant.

  1. Tell me about your experience with Intellectus 2.0.

    a. Let’s talk about the training experience.

        i. Strengths?

        ii. Ways to improve?

    b. Let’s talk about the user experience with your own data.

        i. Improvement over how you used to analyze data

        ii. Worse than how you used to do it

    c. Anything else about Intellectus?

  2. What are your plans for conducting research in the future?

  3. Does the training on and use of this tool affect your plans for conducting research in the future?

  4. Does the training on and use of this AI tool affect your opinion about AI-assisted tools in general? Or specifically?

  5. Do you believe there are ethical issues that need to be considered when using AI-assisted tools in research?

    a. What about Intellectus specifically?

  6. Would you use Intellectus or another AI-assisted tool for data analysis in the future?

    a. Why? or Why not?

  7. Do you believe that Intellectus or other AI-assisted tools help with the stress faculty members feel from pressure of the publish-or-perish phenomenon?

  8. You mentioned that ______[1] are barriers that prevent HEI faculty from conducting more research. Do you believe that these barriers are helped by AI tools?


  1. This was filled out by data from the questionnaire.