Part 3: Overview of Artificial Intelligence
This is the third of several newsletters that I have been motivated to write by the 27-page report AI and the Future of Learning: Expert Panel Report (Roschelle, Lester, & Fusco, 2020, link), on the topic of Artificial Intelligence (AI) in education. The report is based on the work of 22 carefully selected experts in the field of AI in education.
The term Artificial Intelligence came into prominence as a consequence of the Dartmouth Summer Research Project on Artificial Intelligence held in 1956. Quoting from the Wikipedia article on the Dartmouth workshop (Wikipedia, 2021a, link):
In the early 1950s, there were various names for the field of “thinking machines”: cybernetics, automata theory, and complex information processing. The variety of names suggests the variety of conceptual orientations.
In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, decided to organize a group to clarify and develop ideas about thinking machines. He picked the name ‘Artificial Intelligence’ for the new field. He chose the name partly for its neutrality; avoiding a focus on narrow automata theory.
At about the same time, the term Machine Intelligence (MI) came into use in Europe, and both AI and MI now are widely used names for the field.
Human intelligence has a long history. Researchers believe that the earliest prehumans appeared about six million years ago. These earliest prehumans had a combination of physical and mental capabilities built into their genes that allowed them to survive. About six million years of evolution led to Homo sapiens, who had the physical and mental capabilities to learn and use oral languages. A key aspect of this is that, while today’s children are born with the innate capability to learn to speak and communicate in an oral language, it requires considerable learning over a long period of time to actually develop these skills. That is, evolution provided the physical and mental capabilities for such oral language, and education is the vehicle for passing on oral communication skills from generation to generation.
We have evidence of drawings and paintings on cave walls going back more than 40,000 years, and of using marks on small stones (including some made out of clay) as a means of communication more than 10,000 years ago. The first development of written language evidently occurred about 5,500 years ago in Sumeria. I like to think of this as one of the earliest important steps in the development of AI. Written language is a very important aid to human intelligence, and we humans created it and passed on its use from generation to generation.
In brief summary, through a combination of evolution and human ingenuity, humans have developed tools that can be used to enhance their intellectual capabilities. Some of these can be mastered quickly, while others take years of study and practice to meet contemporary standards. These intelligence-enhancing tools can contribute significantly to our quality of life. Their use also can decrease the quality of life for their users and/or others. Thus, AI is a change agent, and for any particular person the effects of using a specific AI-based product will lie somewhere on a scale from very bad to very good. We certainly see this range of effects in computer games.
Counting provides a good example. Humans and some other animals have an innate ability to learn to count. As humans developed their early civilizations, counting and simple arithmetic proved to be useful aids to representing and helping to solve a variety of problems. The tally stick was developed as an aid to counting and keeping track of numbers at least 40,000 years ago (Wikipedia, 2021b, link). We now have paper-and-pencil arithmetic, calculators, and computers as aids in dealing with math-related tasks.
If you want a very quick overview of AI, I recommend the short video by Ashish Bhatnagar, Artificial Intelligence Explained in 5 Minutes! (Bhatnagar, 2020, link).
The video explains AI in terms of the use of computers to carry out a number of activities that humans can do. The goal of AI is to develop systems that can function intelligently and independently. The video identifies a number of human capabilities that make use of human intelligence, and relates these to specific areas of AI research. In the list below, the names of these areas are given in parentheses.
The human capabilities mentioned in the video include:
- Speak and listen. (Speech Recognition.)
- Write and read text in one or more natural languages. Note that many people are bilingual, and that people who do simultaneous translation are displaying very high linguistic skills. (Natural Language Processing.)
- See with their eyes and process what they see. (Machine Learning.)
- Recognize a scene around them. Remember the past and integrate (parts of it) with the present. (Image Processing.)
- Understand their environment and move around fluidly. (Robotics.)
- See patterns, such as groupings of like objects. (Pattern Recognition, Neural Networks, Deep Learning; see the brief sketch following this list.)
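To make these categories a bit more concrete, here is a minimal pattern-recognition sketch written in Python. It assumes the open-source scikit-learn library is available, and it trains a small neural network to recognize 8x8 images of handwritten digits. It is offered only as an illustration of the general idea, not as a description of any particular AI product.

```python
# A tiny pattern-recognition example: a small neural network learns to
# recognize 8x8 grayscale images of handwritten digits (0-9).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labeled digit images bundled with scikit-learn
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer of 64 units; "deep learning" networks simply stack many
# more (and much larger) layers of the same general kind.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on digits the network has never seen:",
      model.score(X_test, y_test))
```

Even this toy network typically recognizes the large majority of the test digits correctly, which hints at why pattern recognition is such a central ingredient of modern AI.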
Humans use these capabilities to deal with problems and tasks they encounter in their everyday lives. Much of human informal and formal education focuses on learning to recognize, understand, and decide on the importance and immediacy of a problem one encounters, and then to decide on one’s own capability to deal with and possibly solve the problem. Over the years, we learn either to solve or in other ways to deal effectively with the myriad problems we encounter throughout the day.
Humans have many capabilities not in the bulleted list, but all of the items in the list relate to various components of AI research, development, and use. Using AI, humans are able to develop tools that enhance their physical and mental capabilities. In brief summary, here are three key ideas:
- AI can provide aids to solving problems and accomplishing tasks that we have dealt with in the past without the use of AI.
- AI can provide aids to solving problems and accomplishing tasks that we have wanted to deal with in the past, but could not handle with the then current aids to problem solving.
- AI opens up new problem areas that we have not considered in the past, and can contribute to the exploration of and possible solution of some of these problems.
All three of these are of interest to people working to improve schooling and lifelong education.
As noted at the beginning of this newsletter, the 27-page report is based on input from 22 experts in the field of AI in education. The panel of experts created a list of seven recommendations for research priorities (Roschelle, Lester, & Fusco, 2020, link). Quoting from the report:
- Investigate AI Designs for an Expanded Range of Learning Scenarios
- Develop AI Systems that Assist Teachers and Improve Teaching
- Intensify and Expand Research on AI for Assessment of Learning
- Accelerate Development of Human-Centered or Responsible AI
- Develop Stronger Policies for Ethics and Equity
- Inform and Involve Educational Policy Makers and Practitioners
- Strengthen the Overall AI and Education Ecosystem
The next seven sections are brief summaries/comments regarding these seven recommendations.
1. Investigate AI Designs for an Expanded Range of Learning Scenarios
Quoting from the report:
Many important opportunities, such as AI agents to support learning in open-ended science inquiry environments, social studies simulation tools, or curricula to encourage design thinking, are still under-investigated. Likewise, AI learning scenarios may support better preparation for the workplace.
Comment: The point is that we have now had well over 50 years of ongoing research and development on various forms of Computer-assisted Learning. As AI grew in its capabilities, we began to develop Highly Interactive, Intelligent Computer-assisted Learning (HIICAL) systems. Such systems lend themselves to detailed data collection and research on the impact of HIICAL on student learning. Increasingly, we will have HIICAL systems that are more effective than the various traditional methods of teaching, learning, and student evaluation in many parts of the curriculum.
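As one small illustration of the kind of moment-by-moment data such systems can collect and use, here is a minimal Python sketch of Bayesian Knowledge Tracing, a technique that has long been used in intelligent tutoring research to turn a student’s sequence of right and wrong answers into a running estimate of skill mastery. The parameter values below are hypothetical, chosen only for illustration, and this is not necessarily the specific approach the report’s authors have in mind.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch with illustrative parameters.
def bkt_update(p_known, correct, p_slip=0.10, p_guess=0.20, p_learn=0.15):
    """Update the estimated probability that a student has mastered a skill
    after observing one correct or incorrect answer."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Allow for learning between one practice opportunity and the next.
    return posterior + (1 - posterior) * p_learn

p_known = 0.30  # prior belief that the skill is already mastered
for answer in [True, False, True, True, True]:  # one student's answer log
    p_known = bkt_update(p_known, answer)
    print(f"{'correct' if answer else 'incorrect'} answer -> "
          f"estimated mastery {p_known:.2f}")
```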
2. Develop AI Systems that Assist Teachers and Improve Teaching
Quoting from the report:
Experts were aware that today’s AI systems have dashboards and other interfaces for teachers, but that these often fall short of being usable, friendly, or instrumental for teacher’s work. They fall short of the idea of augmenting the teacher’s intelligence and helping the teacher to grow, and often only make more work for teachers. … Experts called for a vision of AI in the classroom that is more centered in assisting and supporting teachers.
Comment: Being a good, effective classroom teacher is a very challenging and difficult task. Computers and AI can help teachers do a better job, and simultaneously save them considerable time they now spend on rather mundane, busy work. Success in this endeavor will help to gain the support of teachers in greater use of HIICAL.
3. Intensify and Expand Research on AI for Assessment of Learning
Quoting from the report:
Although AI already has been used in assessment of writing, science, and mathematics, much work is still needed to expand the bounds of the student learning activities that can be automatically assessed, the range of competencies that can be captured, and the breadth of assessment across settings and over time.
Comment: Assessment is an ongoing and very challenging component of a teacher’s job. This reminds me of my periodic visits to my medical doctor. A broad range of blood tests are ordered and carried out by technicians and highly automated equipment before my visit. Much of my visit is spent discussing the test results with my doctor and coming to understand the ensuing recommendations. The doctor’s time is spent on the individualization of the treatment I receive. An appropriate use of HIICAL can significantly increase the individualization of instruction in our schools and also free up teacher time to provide more individualized help to students.
4. Accelerate Development of Human-Centered or Responsible AI
Quoting from the report:
Limits in design processes and approaches can be as much of a barrier as issues with how AI collects and uses data. Included in this call is the need for AI that addresses learners with disabilities, learner variability, and the need for universal design for learning in AI applications.
Comment: I would have stated this goal somewhat differently. Every student is unique. Thus, no curriculum, pedagogy, assessment, or overall school environment is ideally designed to fit any specific individual student. A good education includes helping a student learn to cope with what a school can provide, as well as schools doing what they can to meet the needs of individual students. We currently single out students with various disabilities, and also those with various special gifts, in order to make special provisions for them. We need to support more research, development, and implementation of the improvements that AI can bring to the individualization of instruction, in situations where this will benefit students.
The statement about “the need for universal design for learning in AI applications” bothers me. It suggests that there are a number of design principles that all instructional materials making use of AI should follow. As an example, consider a statement such as “The content, pedagogy, and assessment of CAL materials should be free of bias.” But we have not universally agreed on a definition of bias. You have heard the statement, “Beauty is in the eye of the beholder.” To a certain extent, each person has their own definition of bias.
5. Develop Stronger Policies for Ethics and Equity
Quoting from the report:
In the expert panel discussions, there was a clear need to rapidly intensify the work to understand what core standards, guidelines, policies and other forms of guidance are for effective, equitable, and ethical practices in this emerging area. Researchers doing the work have to participate in building the guidance that helps the field grow in a safe and credible manner.
Comment: The process of increasing the use of AI in education provides us with an opportunity to revisit many current educational practices and to explore how ethical they are. For example, how ethical is it to do the amount of separation we currently make between Special Education students and other students? Some countries use much less separation than we do. How ethical is it to place as much emphasis as we do on lock-stepping many students into the day-by-day curriculum content that is being presented to them?
6. Inform and Involve Educational Policy Makers and Practitioners
Quoting from the report:
To participate in making decisions, building capacity among practitioners to understand AI is important. Capacity building is also important so that educators have the infrastructure to test and evaluate emerging AI and so they can inform design decisions. Schools and other educational institutions may need incentives to get more involved in evaluation and policies. Policy makers are learning about AI in general, but may be less aware of specific risks and barriers in education that need policy attention.
Comment: The substantial use of AI in education should not be undertaken lightly. The people making the decision to implement more use of AI and the teachers actually doing the implementation need to understand what they are doing and be convinced that the decisions being made will be beneficial, both to students and to the overall educational system.
7. Strengthen the Overall AI and Education Ecosystem
Quoting from the report:
Experts saw strong ecosystems of educational leaders, innovators, researchers, industry leaders, start-up companies, and other stakeholders as an important mechanism for shaping AI for educational good. Many of the dark scenarios, in contrast, involved poor information sharing or imbalances of power—and ultimately, one industry player acting alone.
Experts also repeatedly called for more attention to building infrastructure for collaboration and techniques for partnerships among researchers, practitioners, policy makers, developers, industry, and other stakeholders.
Comment: The “one industry player acting alone” statement especially caught my attention. Schooling currently is impacted strongly by a number of people and organizations with relatively specific agendas. In addition, a modest number of publishers of instructional curriculum materials and the related training control much of the market. I find Alphabet, Inc., which owns Google, the Google search engine, and a number of other companies, to be an interesting example to explore. Google’s products and services are widely used in education, and Google’s role in the development of Chromebooks is part of this. Google’s search engine makes substantial use of AI as it gathers, stores, and uses information about each of its users. I frequently use this free search engine, and in recent years I have seen more and more of my search results accompanied by ads. So, I am paying for the use of Google’s search engine by dealing with the barrage of ads appearing in my search results, and also by accepting Google’s sale of information about me to a huge range of different companies. I do not think our school children should have this happening to them.
I am specifically concerned about the possibility (perhaps likelihood) that a very few publishers will come to dominate the development and sale of instructional curriculum materials and the related training.
I believe that the report being discussed here is quite weak in the area of assessing the possible impacts of AI on the content of the curriculum. For years I have asked and thought about the following question:
If a computer can solve or greatly help in solving a type of problem or accomplishing a type of task that we want students to learn about in school, how should curriculum, pedagogy, and assessment address this situation?
I initially asked this question only about math education, since quite early on huge progress was being made in developing computer programs that can solve a very wide range of types of math problems. Very good software to accomplish such tasks is available free from a variety of sources. For one example, see WolframAlpha (Wolfram Alpha, 2021, link).
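To illustrate what I mean, here is a brief Python sketch using the open-source SymPy library, which, like WolframAlpha, can solve many school-style math problems exactly. The particular problems shown are my own illustrative choices.

```python
# Computer algebra solving two school-style math problems exactly.
from sympy import symbols, solve, integrate, sin

x = symbols("x")

# Solve a quadratic equation from first-year algebra.
print(solve(x**2 - 5*x + 6, x))      # prints [2, 3]

# Find an antiderivative from first-year calculus.
print(integrate(x * sin(x), x))      # prints -x*cos(x) + sin(x)
```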
This same question can be asked of every discipline now being taught in our schools. For example, what do we want students to learn in second language courses? Suppose the main goal in these courses is for students to gain modest skills in speaking, listening, reading, writing, and translating between their native language and the second language. We now have computer programs that are relatively good (and rapidly improving) at these tasks.
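As a small illustration, the following Python sketch translates an English sentence into French. It assumes the open-source Hugging Face transformers library is installed, together with a deep learning backend such as PyTorch and the default English-to-French model that the library downloads the first time it is used. It is only a hint of what current machine translation systems can do.

```python
# Machine translation in a few lines, using a pretrained translation model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")  # downloads a default model
result = translator("Our class will visit the science museum next week.")
print(result[0]["translation_text"])
```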
A variation of my question is simply to ask, “What do we want students to memorize and what do we want to have them become skilled in retrieving from the Web?” This is a challenging question!
At the beginning of this newsletter, I quoted Marvin Minsky’s statement, “No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.” Here is an example that I find interesting. In my conscious problem-posing mind, I decide I want to move myself from one room in my apartment to another. My conscious mind then turns the problem over to my subconscious mind, and it directs my body in the details of performing the walking task.
As another example, I can tell my computer to send an email message I have just composed to a particular person whose name I keyboard into the “To” space. The computer takes over and the task is completed. In essence, with a computer there are more and more situations where the thought of a human becomes a deed of the computer. Interestingly, a current area of brain research is the development of brain implants that can take a human thought such as, “Send the message I have just keyboarded to my daughter Beth,” and have a computer handle the task.
The educational ramifications of this line of thought already are profound, even without the futuristic concept of thought-input to a computer. I currently must keyboard my instructions to solve the problem, but I do not need to provide the details on how to solve the problem. The computer “knows” how to solve the problem, and automatically does so. More and more problems are being handled by this process (Metz, 8/31/2020, link).
In the years to come, we will be making huge changes to the curriculum, pedagogy, and assessment in our schools. This will be a long, difficult process, and will be going on during a time when our available technology is continuing its rapid change.
Many of the people involved in our current schooling system will have trouble adjusting to these changes. It is likely that the transitions will be quite disruptive and disturbing to many parents, teachers, school administrators, school board members, politicians, and others. Patience and tolerance will be essential. Remember, the goal is to do as well as we can in providing an education that will serve our children effectively during their childhood while they are gaining this education, and in their adulthood in our changing world.
I will leave you now with one more exceedingly important thought about education. The great Greek philosopher Aristotle is often credited with saying, “Give me a child until he is 7 and I will show you the man.” Much more recently, B.F. Skinner said, “Give me a child, and I’ll shape him into anything.” We have known for more than 2,000 years the importance of education from birth through age seven or eight. Formal education during these years currently is coming from a combination of parents, guardians, child care providers, preschools, and the earliest school grades. Informal education comes from friends, picture books, and online media. Television, especially, has come to have a significant impact on the early education of children, and now computer games and edutainment have become quite important in the lives of many young children.
Hmm. I wonder how long it will be before robots begin to play a significant role in child care and rearing.
Bhatnagar, A. (2020). Artificial intelligence explained in 5 minutes! (Video, 5:30.) Vimeo. Retrieved 1/19/2021 from https://vimeo.com/344248922.
Metz, R. (8/31/2020). Elon Musk shows off a working brain implant — in pigs. CNN Business. Retrieved 1/23/2021 from https://www.cnn.com/2020/08/28/tech/elon-musk-neuralink/index.html.
Rank, M.R. (1/13/2021). Mesopotamian education and schools. History on the Net. Retrieved 1/13/2021 from https://www.historyonthenet.com/mesopotamian-education-and-schools.
Roschelle, J., Lester, J., & Fusco, J. (eds.) (2020). AI and the future of learning: Expert panel report. Digital Promise. Retrieved 12/13/2020 from https://circls.org/wp-content/uploads/2020/11/CIRCLS-AI-Report-Nov2020.pdf.
Wikipedia (2021a). Dartmouth workshop. Retrieved 1/22/2021 from https://en.wikipedia.org/wiki/Dartmouth_workshop.
Wikipedia (2021b). Lebombo bone. Retrieved 1/21/2021 from https://en.wikipedia.org/wiki/Lebombo_bone.
Wolfram Alpha (2021). WolframAlpha Intelligent Systems. Retrieved 1/22/2021 from https://www.wolframalpha.com/.
David Moursund is an Emeritus Professor of Education at the University of Oregon, and editor of the IAE Newsletter. His professional career includes founding the International Society for Technology in Education (ISTE) in 1979, serving as ISTE’s executive officer for 19 years, and establishing ISTE’s flagship publication, Learning and Leading with Technology (now published by ISTE as Empowered Learner). He was the major professor or co-major professor for 82 doctoral students. He has presented hundreds of professional talks and workshops. He has authored or coauthored more than 60 academic books and hundreds of articles.