“Have you tried using AI to help you with this task?” is a question we have been hearing a lot lately. We look around, and we see AI everywhere. Students are using AI on academic assignments (whether or not their professors have allowed it; Digital Education Council, 2024), the academic world is putting out more AI-related workshops than ever, and grant funding agencies are encouraging researchers to study AI or use it in their work.
The truth is that AI has been around longer than we think. In a nutshell, AI stands for artificial intelligence: “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy” (IBM, 2024). While most people think of ChatGPT or self-driving cars when asked for an example of AI, it is far more embedded in our world. Digital assistants (e.g., Apple’s Siri), search engines (e.g., Google), social media (e.g., Meta’s Facebook), personalized recommendations during online shopping, rideshare apps (e.g., Uber), and navigation apps (e.g., Google Maps) all rely on AI.
In the field of education, AI can be a transformative force: a tool that greatly enhances the efficiency and quality of teaching and scholarship. However, there is much to be learned about AI and its impact. Using questions developed with ChatGPT, we asked two experts in this area to provide context and suggestions for how to use AI in scholarship.
Adam Lockwood (AL) is an assistant professor at Kent State University. His work focuses on the intersection of AI and school psychology through scholarship and practitioner experiences. He also used AI to draft his answers to our questions; you can see his process here.
Rochelle Butler (RB) is a Research Consultant in the Office of Innovative Technology (OIT) at the University of Tennessee-Knoxville. Her work focuses on supporting researchers to conduct rigorous and ethical studies that advance knowledge and practice in various disciplines.
What are the advantages and disadvantages of using AI in scholarship?
AL: In terms of advantages, AI saves time, democratizes access to research, and provides support for data analysis, methodological guidance, and creative brainstorming. It allows researchers to focus more on conceptual work rather than tedious tasks. Tools like GPT-4 and Claude are particularly effective as research assistants, providing insights and automating routine processes. I use both. I also love NotebookLM for organizing research because it is grounded in the sources you give it. As for disadvantages, over-reliance on AI risks deskilling researchers (loss of skills) by reducing direct engagement with foundational research activities, which can lead to cognitive atrophy. Ethical concerns regarding data privacy and appropriate data use also need careful consideration. AI also hallucinates, so you must double-check its output.
RB: AI can help researchers process large amounts of text data quickly. Because it does so with seemingly relative ease, researchers may be able to identify patterns and generate insights that would not have been possible without it. However, speed can come at the expense of critical analysis, methodological rigor, and thoughtful interpretation. AI may also introduce some objectivity into the data analysis process by removing some level of human intervention when identifying patterns in data. But it lacks reflective capability, which potentially limits the depth and adaptability of the analysis: qualitative researchers often engage in reflexivity (a process of reflecting on their own influence on the research and data interpretation), and AI cannot. AI systems also rely on training data, which can be biased; the data used to train AI models might not represent the diversity of the population a scholar seeks to understand. While AI is efficient at processing textual data, it may overlook the meaning contained in interviewees’ non-verbal cues, tone, and body language. Finally, there is a risk that AI-generated results could be seen as more “objective” or “truthful” simply because they are produced by a machine, leading to uncritical acceptance of AI output without sufficient scrutiny.
How is AI transforming the way research is conducted, particularly in fields that traditionally relied on human intuition and manual analysis?
AL: AI is revolutionizing research by democratizing access to advanced analytical tools, especially for fields like school psychology that have traditionally relied on human-intensive methods. It will provide a means for scholars, regardless of their research background, to conduct sophisticated analyses and broaden the scope of their research. This transformation enables new insights and augments traditional methods with data-driven approaches. I also use it to generate ideas for research or to improve a project or idea. On the downside, this surge in new research will also lead to a lot of spurious findings.
RB: AI is introducing new levels of speed and scalability. However, even though AI allows researchers to analyze large datasets, large datasets without specific and relevant research questions may introduce noise that complicates analysis and potentially skews results; analyzing more data doesn’t necessarily lead to more meaningful insights. AI also allows researchers to collect and analyze data in virtually real time, which means scholars may detect patterns and trends early in the data collection process and respond dynamically to emerging findings as they arise. Additionally, AI offers opportunities for scholars to learn new methods and tools through AI-driven tutorials, which may enhance their analyses and improve their findings.
In what ways can AI expedite or enhance the discovery process in academic research?
AL: AI expedites research by acting as a collaborator that can rapidly analyze large datasets, provide methodological suggestions, and propose new research angles. It enhances the discovery process by identifying patterns that may not be readily apparent to human researchers and by offering rapid literature reviews or even initial drafts for academic writing. I published a small paper on this a while ago; the technology wasn’t bad then, and it is much better now.
RB: Machine learning algorithms used with AI can create predictive models that forecast future outcomes based on historical data, which may help researchers employ proactive interventions. AI also allows researchers to capture and analyze data in virtually real time, enabling immediate and responsive interventions. For example, if a study monitoring air quality detects pollution spikes, researchers could adjust their analysis to investigate underlying causes or deploy resources to affected areas faster than traditional research methods might allow. Interactive chatbots are another way AI enhances data collection in academic research: unlike traditional surveys, chatbots can tailor each question to a participant’s previous answers, allowing for deeper insights and potentially more relevant data.
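The adaptive-survey idea Ms. Butler describes can be sketched in a few lines: each answer determines which question comes next, unlike a fixed-order questionnaire. This is an illustrative toy, not a real survey platform; all question text and branch keys are invented for the example.

```python
# Hypothetical adaptive survey: the next question depends on the
# answers collected so far, unlike a fixed-order questionnaire.

def next_question(history):
    """Pick the next question key and text based on prior answers."""
    if "uses_ai" not in history:
        return "uses_ai", "Do you use AI tools in your research? (yes/no)"
    if history["uses_ai"] == "yes" and "which_tools" not in history:
        return "which_tools", "Which AI tools do you use most often?"
    if history["uses_ai"] == "no" and "barriers" not in history:
        return "barriers", "What keeps you from using AI tools?"
    return None, None  # survey complete

# Simulated participant session: a "yes" answer routes the participant
# to the follow-up about tools rather than the question about barriers.
history = {}
for answer in ["yes", "NotebookLM"]:
    key, question = next_question(history)
    history[key] = answer

print(history)  # {'uses_ai': 'yes', 'which_tools': 'NotebookLM'}
```

A real chatbot would generate or select questions with a language model, but the branching logic, routing on prior responses, is the core of what makes the data collection adaptive.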
How should scholars navigate issues of data privacy, ownership, and ethical considerations when using AI in their research?
AL: I believe strongly in open science and think that all datasets should be redacted and provided online. Sharing data publicly, such as through platforms like OSF, facilitates academic honesty and accessibility. I already do this, so my data has already been scraped by AI (it scrapes information available online). However, there are tools like Copilot for Security that have Business Associate Agreements (BAAs) in place for organizations and can be used with HIPAA data. For these tools, the same rules that apply to any other technology (e.g., Google Drive) apply: check with your IRB and IT folks about the rules at your organization. Honestly, we need more guidance on the topic of ethics in AI in general, and in research specifically. My biggest concern is: what are these AI companies doing with our data? They will scrape and use our data as they please, and we cannot even dream of some of the ways this could be harmful or abusive (think of the concerns raised by folks like Edward Snowden times 100). Even if data is 100% de-identified, we still have to worry about the mosaic effect, a term used to describe the phenomenon where seemingly innocuous or non-sensitive datasets, when combined, can reveal sensitive information using powerful technology. The idea is that individual pieces of data might not be personally identifiable or pose a privacy risk on their own, but when multiple pieces are “mosaicked” together, they create a fuller picture that can lead to the identification of private details or otherwise unintended insights. Here’s a simple analogy: imagine you have multiple jigsaw puzzle pieces from different puzzles. On their own, none of these pieces may provide meaningful information. But if you gather enough pieces and assemble them, you might be able to see the complete image they create.
Similarly, the mosaic effect occurs when disparate data points are combined, leading to the revelation of patterns or information that wasn’t apparent in each dataset alone.
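The mosaic effect Dr. Lockwood describes can be made concrete with a toy data-linkage example: two datasets that look harmless on their own are joined on shared quasi-identifiers (here, ZIP code plus birth year), linking a named person to a sensitive attribute. All records below are fabricated for illustration.

```python
# Toy illustration of the "mosaic effect": joining two individually
# innocuous datasets on quasi-identifiers re-identifies a person.
# All data is fabricated.

health_survey = [  # "de-identified": contains no names
    {"zip": "37916", "birth_year": 1988, "diagnosis": "anxiety"},
    {"zip": "44242", "birth_year": 1975, "diagnosis": "none"},
]

voter_roll = [  # "public": contains no health information
    {"zip": "37916", "birth_year": 1988, "name": "J. Doe"},
    {"zip": "44242", "birth_year": 1990, "name": "A. Smith"},
]

# Mosaic the two pieces together by matching on ZIP + birth year.
reidentified = [
    {"name": v["name"], "diagnosis": h["diagnosis"]}
    for h in health_survey
    for v in voter_roll
    if (h["zip"], h["birth_year"]) == (v["zip"], v["birth_year"])
]

print(reidentified)  # [{'name': 'J. Doe', 'diagnosis': 'anxiety'}]
```

Neither dataset alone exposes anything sensitive; the join is what reveals it, which is why de-identification by itself is not a complete privacy safeguard.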
RB: Researchers should inform participants about what data is being collected, how AI will process it, and the purposes it serves. Consent forms should be explicit about AI’s role in data analysis and any implications. Data should be anonymized wherever possible, especially if it contains sensitive information. When AI is used to make predictive or diagnostic recommendations, researchers should ensure that it does not perpetuate or exacerbate existing social inequalities. Scholars should also describe their use of AI and explain, in their writing, how the AI model generated its results so stakeholders understand how the findings were produced. I think scholars should also use AI platforms that do not use their research data to train the existing model.
What do you envision as the future of academic scholarship in the context of rapid AI advancements?
AL: The future of academic scholarship will likely involve a rapid integration of AI tools. I strongly believe that AI is the most disruptive technology we’ve ever seen. AI has the potential to democratize knowledge production, making sophisticated research accessible to a wider audience. However, to prevent increased inequity, it is essential to ensure that access to new technologies is not limited to well-funded institutions. I want every EdS-level school psychologist to be able to easily conduct action research/program evaluation. I think this can occur. We need to promote broader accessibility and ethical use of AI to ensure it serves as a bridge rather than a barrier. We also need to train non-PhD-level school psychologists on how to use AI to analyze data so that they can be system leaders on program evaluation.
RB: AI advancements will likely bring shifts in research expectations and methodology. As AI automates some research tasks, scholars may feel pressure to produce research at a faster pace. However, speed can come at the expense of quality and originality of academic work. The push for quick research output could lead to an intensified culture of “publish or perish”, where scholars prioritize producing numerous studies over conducting in-depth, high-quality research and robustly exploring complex research questions. In terms of methodology, traditional research often starts with a hypothesis, followed by data collection and analysis to confirm or refute it. AI, however, allows for data-driven discovery, where patterns and insights emerge without pre-defined hypotheses. This may bring a renewed emphasis on exploratory research where data exploration reveals unexpected trends or associations that researchers can further investigate.
Do you believe AI will eventually lead to entirely new research methodologies or fields of study?
AL: AI has already led to entirely new methodologies by enabling data analyses and modeling techniques that were previously unimagined. The combination of machine learning, natural language processing, and large-scale data analysis opens up new avenues for research, potentially creating fields focused on human-AI collaboration and ethical AI governance in scholarship. I believe that this trend will accelerate as the technology progresses.
RB: I do believe AI will eventually lead to new research methodologies or fields of study. AI systems are complex, and understanding how they make decisions could become a field of study in itself. AI systems that generate predictive models offer additional approaches to empirical research. Rather than simply analyzing existing data, AI can create simulations and forecasts that allow researchers to explore potential outcomes. This may help researchers develop interventions more proactively or refine their studies based on model predictions. AI’s ability to generate synthetic data also opens possibilities for research methodologies that don’t rely on real-world data collection, which may allow researchers to generate findings in fields where data is scarce or ethically challenging to collect.
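The synthetic-data idea above, in its simplest possible form, amounts to estimating distribution parameters from a small real sample and then sampling new records from that distribution. This is a minimal stdlib-only sketch; real synthetic-data generators (e.g., GAN- or copula-based) model the joint structure of many variables, and the sample values here are fabricated.

```python
# Minimal sketch of synthetic data generation: fit a simple normal
# distribution to a small "real" sample, then sample synthetic records.
# The sample scores are fabricated for illustration.
import random
import statistics

real_scores = [72, 85, 90, 66, 78, 88, 95, 70]

# Estimate the distribution's parameters from the real sample.
mu = statistics.mean(real_scores)
sigma = statistics.stdev(real_scores)

# Draw new synthetic observations from the fitted distribution.
random.seed(0)  # reproducible sketch
synthetic_scores = [random.gauss(mu, sigma) for _ in range(5)]
print([round(s, 1) for s in synthetic_scores])
```

The synthetic records share the sample's summary statistics but correspond to no real participant, which is what makes this approach attractive when real data is scarce or sensitive.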
Could AI democratize knowledge production or does it risk increasing existing academic inequities between institutions with and without access to advanced AI technologies?
AL: AI holds the potential to democratize knowledge production (and consumption) by providing advanced tools to those who may not have traditional access. However, there is a risk that AI could widen existing inequities if access remains limited to well-funded institutions. Ensuring accessibility of AI technologies to everyone is critical to prevent a divide and to ensure that these tools bridge gaps in academic capabilities. I’m soon submitting a paper with a colleague, Jeffrey Brown at San Diego State, on whether AI will increase or decrease equity, because I think we need to have a lot of discussion on the topic.
RB: I think there is a risk of increasing academic inequities. The expense and technical expertise required to implement AI tools are significant. Well-funded institutions may be able to conduct AI research, while smaller, less-funded institutions may not have the resources to do so. This disparity could widen the gap in student outcomes, research opportunities, and academic funding. Furthermore, if larger datasets and AI-driven methodologies become the norm, certain fields or research topics that rely on smaller, more qualitative data (like the humanities or some social sciences) may struggle to compete for funding and recognition, potentially narrowing the diversity of academic research.
Is there a role for AI in assisting with student research projects, and how do we ensure this aids rather than replaces critical thinking?
AL: AI can be very helpful in student research by providing explanations, suggesting directions, and assisting in writing. We need to use grounded models (which ensure that AI uses verifiable data) and Retrieval-Augmented Generation (RAG) (which combines information retrieval with AI generation to provide more accurate and contextually appropriate responses) to ensure that AI outputs are based on reliable sources, minimizing the risk of misinformation. However, AI should supplement critical thinking, not replace it. Educators and mentors must guide students to use AI effectively, enhancing learning while still developing core research skills. Some academics worry that AI-induced cognitive atrophy is possible.
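The RAG pattern Dr. Lockwood mentions can be sketched very simply: rank a document store against the query, retrieve the best match, and put that passage into the prompt so the model answers from a verifiable source. This toy version scores by keyword overlap and skips the language-model call entirely; real RAG systems use embedding-based retrieval and an actual model, and the documents and query here are made up.

```python
# Highly simplified RAG sketch: retrieve the most relevant document
# by word overlap, then build a prompt grounded in that passage.
# Real systems use embeddings and a language model; this is a toy.

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

documents = [  # hypothetical source passages
    "Reflexivity is the researcher's reflection on their own influence.",
    "RAG grounds model answers in retrieved source passages.",
]

query = "How does RAG ground model answers?"
context = retrieve(query, documents)

# The retrieved passage is injected into the prompt, so the model's
# answer can be checked against a known source.
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: {query}"
print(prompt.splitlines()[1])  # the retrieved passage
```

The grounding comes from the prompt construction: because the model is instructed to answer only from the retrieved passage, its output can be traced back to a source a student can verify.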
RB: I do think there is a role for AI in assisting students with research projects. I think students should be taught to disclose how they used AI in their projects, which fosters integrity and accountability. AI is a tool to enhance, not replace, individual academic contribution, and having students disclose their particular use of AI reinforces the importance of transparency in research. To foster critical thinking, students should also be taught not to merely accept AI-generated summaries at face value. Instead, they should critically evaluate the information AI tools provide, question the sources, and assess how it contributes to their understanding of the topic.
In closing, Dr. Lockwood noted: “While I remain cautiously optimistic, I believe AI’s role in research should be guided intentionally. We need a balanced integration that enhances research quality without diminishing our capabilities as scholars. Models like GPT-4 and others provide valuable support, but we must continually evaluate their impact on our skills and practices, advocating for responsible and thoughtful use.”
Thank you to Dr. Lockwood and Ms. Butler for sharing their expertise and providing insight into utilizing AI in scholarship! What are your experiences with using AI and have you considered using it in your scholarship? Share below!