Generative AI… some wonderings.
It may seem somewhat counterintuitive to begin by saying what this is not. Given the topic and the sheer speed of change in this field, it would be presumptuous to present myself as some kind of expert; it may be more accurate to say I am an intern with a healthy dose of curiosity and grounded experience.
Anyway, getting back to what I am not: I have no qualifications in AI, Computer Science or related fields. I simply present my experiences to date and wonder about the things we must consider in the near and distant future.
AI, and generative AI in particular, is a thing. To conclude that it hasn't and won't significantly impact our work and lives from now on is to adopt a head-in-the-sand mentality. I say this from the outset to lay my cards on the table.
Much has been, and continues to be, made of how wrong, incorrect or inaccurate GenAI is. This is true. But I doubt a perfect technology has ever been created. In the same breath, it is useful to consider that today's GenAI is the worst it will ever be. The very nature of GenAI is that it is constantly improving…and fast.
My Emerging AI Principles…
When there is a large dose of uncertainty, the best way to navigate seems to be to establish some guiding principles: short statements that can be used as reference points for making decisions in an uncertain context, the context here being generative AI.
Even though principles are usually enduring truths, unwavering markers, I am calling these emerging principles. I am aware of the irony. However, we are in the very early days of AI use in everyday life, so I am giving myself permission to write something now and to change my mind when I know more.
Principle One:- Caution should be applied when predicting
This is just a good general rule, anyway. No human knows the future; at best, any human can make an educated prediction. The closer to the present, the more likely the prediction will be accurate.
When thinking about AI specifically, we have few reference points for what it may mean for our future, our work, our families, our communities, and our societies. In reality, no human can know this with anything we might remotely call certainty.
Therefore, we should apply caution and try not to predict the future, but let it emerge as we experiment and play with AI.
Principle Two:- Approaches to using AI should be open and flexible
This is linked to the previous principle. We don't yet know what the useful applications of GenAI are or will be. This is both the exciting and the apprehensive part of the experience. As we explore its application to our work and lives, we should be open and flexible to the possibilities. That's how we learn and progress.
The alternative, being closed and rigid, would risk causing us to miss opportunities.
Principle Three:- Pace should be governed
If there is anything certain about GenAI, it is the speed at which things are developing. By way of a quick example, Netflix took 3.5 years to reach 1 million users; ChatGPT got there in 5 days (Statista, 2023). The risk is that, as general users and learners of GenAI, we try to move at the same pace. When individuals or organisations move too fast with new things, we make mistakes, cut corners and generally increase the risk, and that increase in risk is unnecessary.
So we don't need to move fast; we just need to move. Walk towards it, to paraphrase the All Blacks mantra.
Principle Four:- Be the human in the loop.
As I understand it, this is derived from an old computer science term coined in the 1960s, referring to the idea that human influence should be present in the loop when software is designed.
In a GenAI context, this means: don't switch off your critical thinking when you ask the AI to carry out a task. There is always a human involved; act as the critic of the AI's work and effort. Another way to think about it is to treat the AI as your personal assistant, your unpaid colleague, assisting you with your work, not doing it for you. It is still your work.
Questions
Here are a small number of questions floating around in my head. As you can see, many of them have a learning, education or teaching flavour. I list them in no particular order, using no particular criteria.
What does AI mean for teaching?
How will AI change the approach to traditional schooling?
Does AI mean the whole curriculum priority order is turned on its head? Is reading and writing no longer number one, with critical thinking taking its place?
Does anyone actually still own intellectual property?
How do we assess a student’s learning in the age of AI?
What is the correct way to reference the use of AI?
Why is it ok to use AI in your work but not your learning?
What can we look at historically that might guide our way forward?
Is there an element of teaching that is untouchable by AI?
Should we guard some areas of education from AI influence?
Are there age groups that should not have access to AI?
How do we explicitly teach the use of AI?
How do we get the teaching workforce to adjust at a reasonable pace when it still struggles with some long-standing technologies?
How do we provide a degree of certainty in an uncertain world?
How do we make AI safe or check it is safe for use with students?
Factoring in AI, does the hierarchy of jobs change?
Can AI genuinely, deeply, and authentically replicate humanness?
Does the emergence of AI and the relative ease of accessing and synthesising knowledge now mean we need to define what foundational knowledge our kids need to know?
How do AI and technology like Apple’s Vision Pro combine to impact schooling?
If we accept that AI will not eliminate the role of the teacher, then what become the most critical components of teaching practice? Does content knowledge take precedence over formative assessment, for example?
If we are to face this technology, in other words walk towards it rather than ignore it…how do we do that in a practical sense?
References
I have used these references in this article to grow my understanding of Generative AI.