Outline Notes: Artificial Intelligence
2021-05-27
Some outline notes for my lectures about digital innovation, delivered to first year students in the School of Computing at Dublin City University.
Introduction
Overview
Artificial Intelligence is a technology that has revolutionized the world, from the fields of finance, advertising, medicine, entertainment, education and so on; it has become quite popular. You should consider learning about the techniques before you learn about its applications.
If you're just a beginner in AI, then you should consider learning about reinforcement learning. This is a very useful technique for tackling problems you may be unfamiliar with -- but it can be used regardless of whether you're familiar with the problem domain.
Reinforcement learning is a central concern for the field. During the last few years, there has been an intense effort to develop more data-driven algorithms for solving games that do not rely on hand-labelled training data.
These algorithms use deep learning methods to improve the recognition performance. Deep learning learns to predict the target objects from the raw data instead of using handlabelling. They all predict the target in the same manner: it is a kind of generative model that learns the relationships between the data points.
More about this section if you're confused.
What is Intelligence?
Artificial Intelligence. I think of Lt. Cmdr. Data, KITT, HAL, the Cylons, Skynet, Mycroft, Johnny 5 and Talos, but also AlphaGo, Google Search, the assistants (Apple's Siri, Amazon Alexa, Google Assistant and Microsoft's Cortana), Boston Dynamics, automated cars and aircraft, the Roomba, and all the way down to my Casio watch.
So, my idea of artificial intelligence seems to span the spectrum between human-like self-awareness - which is as yet science fiction - and basic electronic devices such as my Casio watch. I like Max Tegmark's definition: intelligence is the ability to accomplish complex goals.
From the Future of Life Institute:
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
From Hans Moravec in When will computer hardware match the human brain?:
Computers are universal machines, their potential extends uniformly over a boundless expanse of tasks. Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far removed. Imagine a "landscape of human competence," having lowlands with labels like "arithmetic" and "rote memorization", foothills like "theorem proving" and "chess playing," and high mountain peaks labeled "locomotion," "hand-eye coordination" and "social interaction." We all live in the solid mountaintops, but it takes great effort to reach the rest of the terrain, and only a few of us work each patch.
Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century. I propose (Moravec 1998) that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our representatives in the lowlands to tell us what water is really like.
AlphaGo - The Movie is worth watching, not least for its observations. One interesting point is that the designers and programmers of AlphaGo did not see it as a coherent entity, but rather as a collection of programs designed to achieve a particular goal, i.e. winning at the Chinese game of Go.
Finally, Alan Turing's assertion that:
The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
From I.— Computing Machinery and Intelligence by Alan Turing.
Development
A Brief Overview
See The History of Artificial Intelligence, Rockwell Anyoha.
- Start with Turing's I.— Computing Machinery and Intelligence.
- "Good Old-Fashioned AI": attempts to model intelligence symbolically, writing code that represents objects in the real world (much as a computer represents objects with 1s and 0s) and creating complexity through interconnection. 1980s expert systems that solve problems with a programmatic If/Else approach fall into this category. See GOFAI.
- Machine Learning: developed from the 1980s onwards, machine learning techniques include Artificial Neural Networks, computing systems partly inspired by biology, and many probabilistic algorithms. Machine learning can be divided broadly into Supervised, Unsupervised and Reinforcement learning approaches.
- Although these techniques are becoming more sophisticated all the time, some of the main drivers in the recent AI revolution have been access to cheap and abundant memory/storage, huge datasets and fast processing, thanks to the internet and advances in hardware.
- Online platforms offer access to AI APIs, e.g. Google's Cloud AI.
- The early "AI" chatbot program ELIZA demonstrated people's willingness to engage with a humanlike interface. See ELIZA: a very basic Rogerian psychotherapist chatbot.
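The If/Else style of a 1980s expert system can be sketched in a few lines. A minimal, hypothetical example (the rules and categories here are invented for illustration, not taken from any real system):

```python
# A toy "expert system" in the GOFAI style: knowledge is encoded
# symbolically as hand-written if/else rules, not learned from data.
def classify_animal(has_feathers, can_fly, lives_in_water):
    """Classify an animal from symbolic yes/no facts."""
    if has_feathers:
        if can_fly:
            return "bird"
        return "flightless bird"
    if lives_in_water:
        return "fish"
    return "land mammal"

print(classify_animal(has_feathers=True, can_fly=False, lives_in_water=False))
```

Real expert systems chained hundreds of such rules through an inference engine; the cost of writing and maintaining them by hand is one reason the field moved toward machine learning.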
Areas of Application
In almost every domain, but most visibly in:
- Interfaces, assistants, natural language processing.
- Automated systems, e.g. robotics, self-driving cars. See Tesla, Cadillac's Super Cruise and Waymo.
- Automated decision making (ADM) and analysis, e.g. image recognition, pattern recognition etc.
OpenAI
Overview
From https://openai.com/about/
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
GPT-3
From Wikipedia's article about GPT-3:
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2).
From Language Models are Few-Shot Learners, Brown et al., 2020:
[W]e find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
From https://openai.com/blog/gpt-3-apps/:
Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.
Nine months since the launch of our first commercial product, the OpenAI API, more than 300 applications are now using GPT-3, and tens of thousands of developers around the globe are building on our platform. We currently generate an average of 4.5 billion words per day, and continue to scale production traffic.
Given any text prompt like a phrase or a sentence, GPT-3 returns a text completion in natural language. Developers can “program” GPT-3 by showing it just a few examples or “prompts.” We’ve designed the API to be both simple for anyone to use but also flexible enough to make machine learning teams more productive.
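The idea of "programming" by example can be illustrated by how a few-shot prompt is assembled: the developer writes out a handful of input/output examples as plain text, appends a new input, and the model completes the pattern. A minimal sketch of building such a prompt (the translation task and examples are invented for illustration; the actual API call is omitted):

```python
# Build a few-shot prompt: worked examples followed by a new query.
# A language model is expected to continue the pattern it sees.
examples = [
    ("cheese", "fromage"),
    ("apple", "pomme"),
]

def few_shot_prompt(examples, query):
    """Assemble example pairs plus a new query into one prompt string."""
    lines = ["English: %s\nFrench: %s" % (en, fr) for en, fr in examples]
    lines.append("English: %s\nFrench:" % query)
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "house")
print(prompt)
```

The model's completion of the final line is the "answer"; no weights are updated, which is what makes this few-shot rather than fine-tuning.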
Additional Natural Language Processing
DistilGPT-2
From DistilGPT-2. See DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
Sample output:
AI, or Artificial Intelligence, is a form of automation that allows machines to learn . In a nutshell , an AI can be taught to do something by being given an example of how to do it . For example , when an AI learns how to play a game of chess, it has to be shown how to learn to play. For example, when an AI learns how to play an artificial intelligence, it has to be shown how to play. If it isn't trained for something, the AI can't learn to play it. For example, when an AI learns how to play, it has to be shown how to play. The AI must learn to do something, rather than just learn it. It must learn to think and learn.
ArXiv NLP
From ArXiv NLP model. See DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
Sample output:
AI, or Artificial Intelligence, is a form of automation that allows machines to learn . In a nutshell , an AI can be taught to do something by being given an example of how to do it. Awareness is measured by how well the machine learns to understand the utterances. AI is designed to learn the most important parts of an utterance. As the machine learns to understand the utterances, the information can be leveraged to improve the machine (see Advances in Neural Machine Translation (NMT)), and then used to perform a manual translation.
EleutherAI (NLP)
Overview
From https://www.eleuther.ai/faq/
General
Q: How did this all start?
A: On July 3rd, 2020, Connor Leahy (@Daj) posted in the TPU Podcast Discord:
| https://arxiv.org/abs/2006.16668
| Hey guys lets give OpenAI a run for their money like the good ol' days
To which Leo Gao (@bmk) replied:
| this but unironically
And so it began.
EleutherAI GPT-Neo-2.7B
From EleutherAI GPT-Neo. See GPT-Neo.
The Overview subsection of these notes was written entirely by an AI system, EleutherAI GPT-Neo-2.7B. My writing is (sometimes) better.
The Future?
AI will influence everything, eventually.
Superintelligent AI May Be Impossible to Control; That's the Good News
Postcard from the 23rd century: Not even possible to know if an AI is superintelligent, much less stop it.
Charles Q. Choi. 2021.
Epilogue
In the main lecture, we concluded by thinking about the future, where AI will be a part of everything. Some additional thoughts:
- Explainability:
- It's worth reading this article, Explainable AI: From the peak of inflated expectations to the pitfalls of interpreting machine learning models, to get a sense of how expectations about AI have been at times too high and where we are at the moment with machine learning.
- Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities, Christian Meske et al., December 2020. The authors argue that recent machine learning developments are not easily explainable.
- And some of the principal mistakes made with machine learning implementations are outlined in Pitfalls to Avoid when Interpreting Machine Learning Models, Christoph Molnar et al., July 2020.
- Pedro Domingos of the University of Washington has some interesting thoughts, recorded in summary here; he breaks machine learning into five "paradigms". These overlap with the three broad areas of supervised, unsupervised and reinforcement learning that we talked about in the main lecture. The five paradigms worth reading about are:
- Rule-based learning, e.g. decision trees, random forests etc.
- Connectionism, e.g. artificial neural networks etc.
- Bayesian, e.g. naive Bayes, Bayesian networks, probabilistic graphical models etc.
- Analogy, e.g. the k-nearest neighbour (KNN) and support vector machine algorithms.
- Unsupervised learning, e.g. clustering, dimensionality reduction etc.
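The analogy paradigm can be illustrated with k-nearest neighbours, which classifies a new point by looking at the labels of the most similar training points. A minimal sketch in plain Python (the toy 2-D dataset is invented for illustration):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label `query` by majority vote of its k nearest training points.
    `train` is a list of ((x, y), label) pairs."""
    # Sort training points by squared Euclidean distance to the query.
    nearest = sorted(
        train,
        key=lambda item: (item[0][0] - query[0]) ** 2 + (item[0][1] - query[1]) ** 2,
    )
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters with labels "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_classify(train, (0.5, 0.5)))  # near the "a" cluster
```

No model is "trained" here at all: the data itself is the model, which is the sense in which KNN reasons by analogy to past cases.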
- AI Singularity. I didn't mention this explicitly in the lecture, but it seems to be an accepted end point these days, and it seems probable: once an AI system exceeds human capabilities, it is likely to keep improving at a tremendous rate, so within a short span of time its intelligence would far surpass humanity's. Many people have written about this, including Max Tegmark and Ray Kurzweil, and in very many science fiction treatments, including The Matrix. Often it ends badly for humans! But maybe it doesn't have to be this way. Some ideas include:
- Singularity, xkcd
- Predictions by Ray Kurzweil
“By 2029, computers will have human-level intelligence,” Kurzweil said. Singularity is that point in time when all advances in technology, particularly in artificial intelligence, will lead to machines smarter than human beings.
- AI Singularity and the Growing Risk of Surprise: Lessons from the IDF’s Strategic and Operational Learning Processes, 2014-2019, Meir Finkel.
References
- Write With Transformer at Hugging Face
- Hugging Face, The AI community building the future
- OpenAI, Discovering and enacting the path to safe artificial general intelligence
- OpenAI’s text-generating system GPT-3 is now spewing out 4.5 billion words a day. Robot-generated writing looks set to be the next big thing. The Verge. https://www.theverge.com/2021/3/29/22356180/openai-gpt-3-text-generation-words-day
- This AI Can Generate Convincing Text—and Anyone Can Use It. The makers of Eleuther hope it will be an open source alternative to GPT-3, the well-known language program from OpenAI. Wired. https://www.wired.com/story/ai-generate-convincing-text-anyone-use-it/
- Life 3.0, Max Tegmark. Allen Lane (Penguin), 2017
- Future of Life Institute
This page was last rendered on June 29, 2023.