Niall McMahon


Outline Notes: Artificial Intelligence


Some outline notes for my lectures about digital innovation, delivered to first year students in the School of Computing at Dublin City University.



Artificial Intelligence is a technology that has revolutionized the world, from the fields of finance, advertising, medicine, entertainment, education and so on; it has become quite popular. You should consider learning about the techniques before you learn about its applications.

If you're just a beginner in AI, then you should consider learning about reinforcement learning. This is a very useful technique for tackling problems you may be unfamiliar with -- but it can be used regardless of whether you're familiar with the problem domain.

Reinforcement learning is a central concern for the field. During the last few years, there has been an intense effort to develop more data-driven algorithms for solving games that do not rely on hand-labelled training data.
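The reward-driven loop at the heart of reinforcement learning can be sketched without any neural network. Below is a minimal tabular Q-learning example on a toy "game" (a five-cell corridor, an invented environment for illustration); the game-playing systems mentioned above replace the value table with a deep network, but the update rule is the same idea:

```python
import random

# Toy "game": a corridor of 5 cells; the agent starts at cell 0 and
# earns a reward of 1 only on reaching cell 4. Actions: 0 = left, 1 = right.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    """Environment dynamics: move left or right; reward only at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # the Q-value table
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted value of the best next action.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

q = train()
# After training, the learned policy should be "go right" in every cell.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Note that no hand-labelled data appears anywhere: the agent learns purely from the reward signal it receives by acting in the environment.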

These algorithms use deep learning methods to improve the recognition performance. Deep learning learns to predict the target objects from the raw data instead of using hand-labelling. They all predict the target in the same manner: it is a kind of generative model that learns the relationships between the data points.

More about this section if you're confused.

What is Intelligence?

Artificial Intelligence. I think of Lt. Cmdr. Data, KITT, HAL, the Cylons, Skynet, Mycroft, Johnny 5 and Talos, but also AlphaGo, Google Search, the assistants (Apple's Siri, Amazon Alexa, Google Assistant and Microsoft's Cortana), Boston Dynamics' robots, as well as automated cars and aircraft, the Roomba, and all the way down to my Casio watch.

So, my idea of artificial intelligence spans the spectrum between human-like self-awareness - as yet science fiction - and basic electronic devices like my Casio watch. I like Max Tegmark's definition: intelligence is the ability to accomplish complex goals.

From the Future of Life Institute:

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

From Hans Moravec in When will computer hardware match the human brain?:

Computers are universal machines, their potential extends uniformly over a boundless expanse of tasks. Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far removed. Imagine a "landscape of human competence," having lowlands with labels like "arithmetic" and "rote memorization", foothills like "theorem proving" and "chess playing," and high mountain peaks labeled "locomotion," "hand-eye coordination" and "social interaction." We all live in the solid mountaintops, but it takes great effort to reach the rest of the terrain, and only a few of us work each patch.
Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century. I propose (Moravec 1998) that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our representatives in the lowlands to tell us what water is really like.

AlphaGo - The Movie is worth watching, not least for the observations it contains. One interesting observation is that the designers and programmers of AlphaGo did not see it as a coherent entity, but rather as a collection of programs designed to achieve a particular goal, i.e. to win at the Chinese game of Go.

Finally, Alan Turing's assertion that:

The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

From Computing Machinery and Intelligence by Alan Turing.


A Brief Overview

See The History of Artificial Intelligence, Rockwell Anyoha.

Areas of Application

In almost every domain, but most visibly in:




From OpenAI's charter:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.


From Wikipedia's article about GPT-3:

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2).
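The "autoregressive" part can be sketched in a few lines: the model repeatedly predicts the next token from the tokens seen so far, feeding each prediction back in as new context. The toy below uses a bigram count table over an invented corpus in place of GPT-3's transformer, but the generation loop has the same shape:

```python
# A toy autoregressive "language model": predict the next word from the
# current word using counts from a tiny corpus, then generate by feeding
# each prediction back in as the new context. GPT-3 does the same thing
# with a transformer conditioned on thousands of tokens, not a count table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat so the cat sat on the rug".split()

# Count bigram transitions: how often each word follows each other word.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(prompt, n_words=4):
    out = [prompt]
    for _ in range(n_words):
        counts = transitions.get(out[-1])
        if not counts:
            break  # no continuation was seen in training
        # Greedy decoding: always pick the most frequent next word.
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # the cat sat on the
```

Real language models predict a probability distribution over the whole vocabulary and usually sample from it rather than always taking the single most likely word, which is why their output is varied rather than repetitive.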

Language Models are Few-Shot Learners, Brown et al. 2020.

[W]e find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.


Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.
Nine months since the launch of our first commercial product, the OpenAI API, more than 300 applications are now using GPT-3, and tens of thousands of developers around the globe are building on our platform. We currently generate an average of 4.5 billion words per day, and continue to scale production traffic.
Given any text prompt like a phrase or a sentence, GPT-3 returns a text completion in natural language. Developers can “program” GPT-3 by showing it just a few examples or “prompts.” We’ve designed the API to be both simple for anyone to use but also flexible enough to make machine learning teams more productive.
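The "programming by examples" idea above is simpler than it sounds: a few-shot prompt is just plain text containing a few worked examples followed by an unfinished one for the model to complete. The sentiment-classification task and helper below are hypothetical, and the actual API call is omitted; the sketch only shows how such a prompt is assembled:

```python
# "Programming" a model with examples: a few-shot prompt is just text.
# The examples establish the pattern; the model is asked to continue it.
# (Hypothetical task and examples; the completed prompt string would be
# sent to a language-model endpoint, which is omitted here.)
examples = [
    ("I loved this film!", "positive"),
    ("The plot made no sense.", "negative"),
    ("A delight from start to finish.", "positive"),
]

def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt from (text, label) pairs."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")  # the model completes this
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "Two hours of my life I won't get back.")
print(prompt)
```

No weights are updated: the model infers the task purely from the pattern in the prompt, which is what Brown et al. mean by "few-shot learning".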

Additional Natural Language Processing


From DistilGPT-2. See DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.

Sample output:

AI, or Artificial Intelligence, is a form of automation that allows machines to learn . In a nutshell , an AI can be taught to do something by being given an example of how to do it . For example , when an AI learns how to play a game of chess, it has to be shown how to learn to play. For example, when an AI learns how to play an artificial intelligence, it has to be shown how to play. If it isn't trained for something, the AI can't learn to play it. For example, when an AI learns how to play, it has to be shown how to play. The AI must learn to do something, rather than just learn it. It must learn to think and learn.


From ArXiv NLP model. See DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.

Sample output:

AI, or Artificial Intelligence, is a form of automation that allows machines to learn . In a nutshell , an AI can be taught to do something by being given an example of how to do it. Awareness is measured by how well the machine learns to understand the utterances. AI is designed to learn the most important parts of an utterance. As the machine learns to understand the utterances, the information can be leveraged to improve the machine (see Advances in Neural Machine Translation (NMT)), and then used to perform a manual translation.

EleutherAI (NLP)



Q: How did this all start?
A: On July 3rd, 2020, Connor Leahy (@Daj) posted in the TPU Podcast Discord:
| Hey guys lets give OpenAI a run for their money like the good ol' days
To which Leo Gao (@bmk) replied:
| this but unironically
And so it began.

EleutherAI GPT-Neo-2.7B

From EleutherAI GPT-Neo. See GPT-Neo.

The overview at the top of these notes was written entirely by an AI system, EleutherAI's GPT-Neo-2.7B. My writing is (sometimes) better.

The Future?

AI will influence everything, eventually.

Superintelligent AI May Be Impossible to Control; That's the Good News
Postcard from the 23rd century: Not even possible to know if an AI is superintelligent, much less stop it.
Charles Q. Choi. 2021.


In the main lecture, we concluded by thinking about the future, where AI will be a part of everything. Some additional thoughts: