To listen to the audio version: https://tinyurl.com/bdcwvmsh
As a child, when I realised that I could get kicked out of classes that I had no interest in attending, I believed I had discovered the golden ticket. As I am sure you have experienced, our brains thrive on dopamine. When an experience leaves us with an unpleasant outcome, we are innately inclined to avoid a recurrence of that circumstance in the future; this phenomenon is typically referred to as "avoidance learning" or "negative reinforcement". It's a basic principle of behaviourism, as established by the behavioural psychologist B.F. Skinner, namesake of TV icon Bart Simpson's nemesis. The avoidance process manifests itself differently in each student. Bart and I were extreme examples of how we chose to handle our ADD and dyslexia respectively, but a learning difficulty is not a prerequisite for this kind of avoidance.
Within the classroom ecosystem, this phenomenon can take several forms. For some students, the root of avoidance lies in their shyness or fear of potential judgement from their peers, preventing them from seeking the teacher's guidance. Beyond the classroom, tasks like homework often become subjects of avoidance. This ties back into our brain's propensity towards rewarding experiences, pushing us away from those that generate discomfort or stress.

Maths provides an excellent example of this dichotomy. It's a subject that tends to divide students into two distinct camps: those who enjoy it and those who dread it. In the early stages of learning basic operations like addition and subtraction, a student can be lauded as "smart" 20 times in as many minutes or conversely be self-branded as "less than" in the same timeframe. Many studies have been conducted around this topic over the years, but one of the most notable is Mark H. Ashcraft's "Math Anxiety: Personal, Educational, and Cognitive Consequences," published in 2002. Ashcraft notes that maths anxiety can negatively impact 'working memory,' which is critical if a student is to excel at a subject. In this scenario, anxiety essentially devours valuable cognitive resources that could otherwise be utilised for calculations, exacerbating the problems that the student is already facing due to avoidance learning.

Young minds are innately driven by purpose and are on a constant quest for self-identity. However, existing formats and resources within the education system have precipitated systemic issues. These challenges result in millions of children feeling academically inadequate each year, a sentiment from which they often never recover. Education is intended to be the great equaliser, but for many, it simply serves as another shackle.
My hope is to help unleash AI's potential to resolve deep-rooted educational problems, especially those impacting economically disadvantaged and underperforming students, while also acknowledging concerns surrounding the protection and enhancement of academic integrity.
Through AI, we have the capacity to enhance access to quality academic support at a marginal cost, potentially reshaping the future for millions. However, due to a widespread fear of academic dishonesty, it is crucial to cultivate an environment that champions AI access for students while addressing both founded and unfounded concerns around cheating. In response, I propose a straightforward solution: we should develop a standard layer in AI employing the Fostering Academic Integrity and Rigour (FAIR) methodology. This approach aims to leverage AI's strengths to enhance academic learning while concurrently mitigating the risk of AI misuse for cheating.
The FAIR methodology is not about tricking students or catching them out; it encourages students to critically analyse their responses, verify their work, and ultimately learn the subject matter rather than merely reproducing answers.
Academic integrity is the bedrock of any educational institution, highlighting honesty, responsibility, and fairness in scholarly activities. It is vital for maintaining educational quality, fostering an environment of trust, and nurturing ethical citizens who can contribute positively to society. Likewise, academic rigour refers to the depth and breadth of understanding that students must achieve in their academic disciplines.
However, the historical resource and technological limitations in academia often foster an environment where underperforming and economically disadvantaged students are too easily left behind. Despite their potential, these students frequently underperform for several reasons, including lack of support, inappropriate teaching methods, and personal struggles, all of which can push children into avoidance learning. Without timely academic interventions, their confidence suffers, leading to a vicious cycle of poor academic performance, decreased motivation, and low self-esteem, thereby further widening the educational attainment gap. Through AI, we now have not only the power but the obligation to change this!
AI's presence in academic settings has significantly increased in recent years. On one hand, AI's benefits in education are manifold. It can facilitate tailored learning pathways to cater to individual student's strengths and weaknesses, thereby providing more targeted and effective education. AI can also handle time-consuming tasks such as grading, thereby freeing up time for educators to focus more on teaching and student interaction. However, integrating AI into academia is not without pitfalls. Many worry that it poses a challenge to academic integrity, as it can be misused for cheating and plagiarism.
The misuse of AI for cheating, although a serious concern, has created somewhat of a moral panic within the education sector. While AI can be a valuable tool for learning and comprehension, rampant misuse would undermine academic integrity. Hence, it is vital to proactively and effectively tackle this issue to alleviate institutional fears surrounding AI so that the concerns of educators do not outweigh their desire to provide students with quality AI educational support.
The FAIR (Fostering Academic Integrity and Rigour) methodology is designed to optimise the benefits of AI in academia while discouraging dishonest practices. It encourages students to critically analyse their solutions, verify their information, and develop a deep understanding of the subject matter, rather than merely reproducing answers.
The FAIR methodology enhances academic integrity by advocating an education system that values comprehension and application of knowledge. It promotes active engagement with learning materials, encouraging students to grapple with concepts and problems independently, which ultimately reduces the temptation for academic dishonesty. The idea is for Large Language Models (LLMs), and providers that leverage them, to embed infrequent, strategically placed purposeful errors, or 'FAIR tags', within AI resources, making it possible to detect instances of mere reproduction rather than understanding.
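To make the idea concrete, here is a minimal sketch of what embedding a FAIR tag into a worked solution could look like. Everything here is an illustrative assumption on my part, not a published FAIR specification: the tag is modelled as a single deliberate off-by-one error in one step of an arithmetic solution, and the ledger record is just the tagged text plus a short hash.

```python
import hashlib
import random

def embed_fair_tag(steps, seed=None):
    """Illustrative sketch: perturb the last integer in one randomly
    chosen solution step by +1, producing a deliberate 'FAIR tag'
    error, and return both the tagged steps and a record that a
    school-accessible ledger could later match against."""
    rng = random.Random(seed)
    idx = rng.randrange(len(steps))
    tokens = steps[idx].split()
    # Nudge the final integer token so the error is subtle but detectable.
    for i in range(len(tokens) - 1, -1, -1):
        if tokens[i].lstrip("-").isdigit():
            tokens[i] = str(int(tokens[i]) + 1)
            break
    tagged_step = " ".join(tokens)
    tag_id = hashlib.sha256(tagged_step.encode()).hexdigest()[:12]
    new_steps = list(steps)
    new_steps[idx] = tagged_step
    return new_steps, {"tag_id": tag_id, "tagged_text": tagged_step}
```

A student who understands the material would notice that one step's result is wrong and correct it; a student who copies the output verbatim carries the tag forward. A real implementation would of course need far subtler error placement than a +1 nudge.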
The primary goal of the FAIR system is to reduce instances of cheating without depriving students of the advantages AI can provide throughout their academic journey. It aims to strike a balance, offering a framework that leverages AI's strengths while discouraging its misuse.
To date, numerous attempts have been made by both large language model providers and private businesses to assist educators in identifying instances of academic dishonesty involving AI. While these efforts have been valiant, they have exhibited significant limitations in consistency. The methods employed thus far are also limited to assessing creative writing and are unable to detect cheating that involves computation.
The study of FAIR tags' effectiveness is still in its early stages and is currently being incorporated into InstantTutor. InstantTutor is designed to provide additional AI-powered academic support outside of normal school hours. The app does not allow students to copy and paste responses, and is in the process of testing FAIR tags (withholding the final stage for computational responses) as well as various in-app prompts to discourage students from cheating. However, we are actively seeking additional research participants, so please connect to find out more.

The FAIR methodology relies heavily on FAIR tags, deliberately embedded within AI resources, as an anti-cheating measure. Historical practices such as the inclusion of spurious words in dictionaries to detect plagiarism, or the use of watermarks to ensure authenticity, provide precedents for this approach. FAIR tags serve a similar purpose in an AI-driven academic environment. These tags would be seamlessly integrated into learning content and are designed to be easily identifiable when a student understands the topic and actively engages in the learning process.
The article you're currently reading has been written with the assistance of GPT-4. Although the FAIR methodology is particularly suitable for subjects that require computational knowledge, other approaches may be necessary for the social sciences, which require less computational understanding. For instance, if FAIR tags were embedded in the content provided by GPT for this article, it's likely that I would identify them quite easily. However, in scenarios like these, it wouldn't be difficult to feed my written work to an LLM. The model could then prompt the teacher with pertinent questions to rapidly assess my comprehension of the covered material. The questions could be designed in such a way as to evaluate the connections between key thoughts and references, thereby determining whether learning is occurring, and measuring the depth and breadth of my understanding of the required materials.
Another critical component required for the FAIR methodology is a system alert database. For this to work, schools and providers must come together to create a ledger, or multiple ledgers, accessible by schools. When student work is submitted, it can be run through this system, which searches for FAIR tags. The presence of these tags in a student's work would suggest that they have reproduced the content without fully understanding or engaging with it, as a student who has grasped the material would likely have spotted and corrected these intentional errors.
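The lookup step described above could be sketched as follows. This is a deliberately simplified assumption of how such a ledger check might work, with tags stored as exact text strings; a real deployment would need fuzzier matching, privacy safeguards, and agreement between schools and providers on the record format.

```python
def scan_submission(text, ledger):
    """Return the tag IDs whose tagged text appears verbatim in a
    submission; a hit suggests the FAIR tag was copied uncorrected
    rather than spotted and fixed by the student."""
    hits = []
    for record in ledger:
        if record["tagged_text"] in text:
            hits.append(record["tag_id"])
    return hits
```

A submission that returns no hits is not proof of understanding, of course; it simply means none of the known deliberate errors survived into the final work.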
By addressing a fundamental issue in AI-based education—preventing misuse while promoting its advantages—the FAIR methodology offers a viable solution to a pressing problem that is leading some educational institutions to even ban the technology, thereby discarding the benefits it could provide to students. As we continue to advance in technology, the need for ethical AI in education becomes paramount, and the FAIR methodology sets the stage for this transformation.
Looking to the future, the combination of AI and the FAIR methodology has the potential to revolutionise education. Marc Andreessen shares the benefits ahead of us as elegantly as anyone: "The future of education can be a place where every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child's side at every step of their development, helping them maximise their potential with the machine version of infinite love."
The future of academic integrity and rigour in a post-AI world looks promising indeed, but we cannot pursue it at the cost of our educational system's credibility, nor without the support of our wonderful educational institutions.
If my message resonates with you, please join the participant list to get involved, or message me on LinkedIn.
Luke Deering
Keep it FAIR