To recap, we now have some characterisations of AI, so that when an AI problem arises, you will be able to put it into context, find the correct techniques and apply them. We have introduced the agents language so that we can talk about intelligent tasks and how to carry them out. We have also looked at search in the general case, which is central to AI problem solving. Most pieces of software have to deal with data of some type, and in AI we use the more grandiose title of "knowledge" to stand for data including (i) facts, such as the temperature of a patient, (ii) procedures, such as how to treat a patient with a high temperature, and (iii) meaning, such as why a patient with a high temperature should not be given a hot bath. Accessing and utilising all these kinds of information will be vital for an intelligent agent to act rationally. For this reason, knowledge representation is our final general consideration before we look at particular problem types.

To a large extent, the way in which you organise information available to and generated by your intelligent agent will be dictated by the type of problem you are addressing. Often, the best ways of representing knowledge for particular techniques are known. However, as with the problem of how to search, you will need a lot of flexibility in the way you represent information. Therefore, it is worth looking at four general schemes for representing knowledge, namely logic, semantic networks, production rules and frames. Knowledge representation continues to be a much-researched topic in AI because of the realisation fairly early on that how information is arranged can often make or break an AI application.

4.1 Logical Representations

If all human beings spoke the same language, there would be a lot less misunderstanding in the world. The problem with software engineering in general is that there are often slips in communication which mean that what we think we've told an agent and what we've actually told it are two different things. One way to reduce this, of course, is to specify and agree upon some concrete rules for the language we use to represent information. To define a language, we need to specify both its syntax and its semantics. To specify the syntax of a language, we must say what symbols are allowed in the language and what are legal constructions (sentences) using those symbols. To specify the semantics of a language, we must say how the legal sentences are to be read, i.e., what they mean. If we choose a particular well-defined language and stick to it, we are using a logical representation.

Certain logics are very popular for the representation of information, and range in terms of their expressiveness. More expressive logics allow us to translate more sentences from our natural language (e.g., English) into the language defined by the logic.

Some popular logics are:

Propositional Logic

This is a fairly restrictive logic, which allows us to write sentences about propositions - statements about the world - which can either be true or false. The symbols in this logic are (i) capital letters such as P, Q and R which represent propositions such as: "It is raining" and "I am wet", (ii) connectives which are: and (∧), or (∨), implies (→) and not (¬), (iii) brackets and (iv) T which stands for the proposition "true", and F which stands for the proposition "false". The syntax of this logic comprises the rules specifying where in a sentence the connectives can go, for example ∧ must go between two propositions, or between a bracketed conjunction of propositions, etc.

The semantics of this logic are rules about how to assign truth values to a sentence if we know whether the propositions mentioned in the sentence are true or not. For instance, one rule is that the sentence P ∧ Q is true only in the situation when both P and Q are true. The rules also dictate how to use brackets. As a very simple example, we can represent the knowledge in English that "I always get wet and annoyed when it rains" as:

It is raining → (I am wet ∧ I am annoyed)

Moreover, if we program our agent with the semantics of propositional logic, then if at some stage, we tell it that it is raining, it can infer that I will get wet and annoyed.
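
To make these semantics concrete, here is a minimal sketch in Python (the symbols R, W and A standing for "it is raining", "I am wet" and "I am annoyed" are our own encoding, not part of the logic itself) which evaluates the sentence above under every possible truth assignment:

from itertools import product

def implies(p, q):
    # P -> Q is false only when P is true and Q is false
    return (not p) or q

# R: "it is raining", W: "I am wet", A: "I am annoyed"
for r, w, a in product([True, False], repeat=3):
    value = implies(r, w and a)
    print(f"R={r!s:5} W={w!s:5} A={a!s:5}  R -> (W and A) is {value}")

# In every assignment where the sentence holds and R is true,
# W and A are forced to be true - exactly the inference above.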

First-Order Predicate Logic

This is a more expressive logic because it builds on propositional logic by allowing us to use constants, variables, predicates, functions and quantifiers in addition to the connectives we've already seen. For instance, the sentence: "Every Monday and Wednesday I go to John's house for dinner" can be written in first order predicate logic as:

∀X ((day_of_week(X, monday) ∨ day_of_week(X, wednesday))
→ (go_to(me, house_of(john)) ∧ eat_meal(me, dinner))).

Here, the symbols monday, wednesday, me, dinner and john are all constants: base-level objects in the world about which we want to talk. The symbols day_of_week, go_to and eat_meal are predicates which represent relationships between the arguments which appear inside the brackets. For example in eat_meal, the relationship specifies that a person (first argument) eats a particular meal (second argument). In this case, we have represented the fact that me eats dinner. The symbol X is a variable, which can take on a range of values. This enables us to be more expressive, and in particular, we can quantify X with the 'forall' symbol ∀, so that our sentence of predicate logic talks about all possible X's. Finally, the symbol house_of is a function, and - if we can - we are expected to replace house_of(john) with the output of the function (john's house) given the input to the function (john).

The syntax and semantics of predicate logic are covered in more detail as part of the lectures on automated reasoning.
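
As a hedged illustration of how such a sentence might be put to work, here is a minimal sketch in Python (the encoding of facts as tuples, and the hand-translation of the rule into a function, are our own simplifications, not a general theorem prover):

# Ground facts are tuples of (predicate, argument, argument).
facts = {("day_of_week", "today", "monday")}

def apply_rule(facts):
    """forall X: day_of_week(X, monday) or day_of_week(X, wednesday)
       -> go_to(me, house_of(john)) and eat_meal(me, dinner)"""
    derived = set()
    for (pred, *args) in facts:
        if pred == "day_of_week" and args[1] in ("monday", "wednesday"):
            derived.add(("go_to", "me", "house_of(john)"))
            derived.add(("eat_meal", "me", "dinner"))
    return derived

print(facts | apply_rule(facts))

A real reasoning system would match the rule against the facts mechanically rather than through hand-written code; that is the subject of the automated reasoning lectures.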

Higher-Order Predicate Logic

In first order predicate logic, we are only allowed to quantify over objects. If we allow ourselves to quantify over predicate or function symbols, then we have moved up to the more expressive higher order predicate logic. This means that we can represent meta-level information about our knowledge, such as "For all the functions we've specified, they return the number 10 if the number 7 is input":

∀f, (f(7) = 10).
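
Python treats functions as first-class values, so we can sketch this kind of quantification over functions directly (the three example functions are our own, chosen purely so that the sentence holds):

# Three illustrative functions, each returning 10 on input 7.
fs = [lambda n: n + 3, lambda n: 10, lambda n: 17 - n]

# "forall f, (f(7) = 10)": we quantify over the functions
# themselves, not over the objects they are applied to.
print(all(f(7) == 10 for f in fs))   # True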

The Meta-level

Suppose we can specify a set of tools to help us work with (represent, think about, etc.) some objects we are interested in. If we can use the same (or some other) tools to work in a similar way with the tools themselves, then we are working at the meta-level. In predicate logic, we use functions, predicates and quantifiers to express information about a set of objects (for example, john). In higher order logic, we use quantifiers to express information about functions and predicates, so we have in some sense moved to a meta-level.

An inability to work at meta-levels is often used as a criticism of intelligent agents, and many people see meta-level reasoning as a key part of intelligence. In particular, Bruce Buchanan, when giving the keynote speech at the largest AI conference in 2000, spoke about creativity at the meta-level, as he believes that the only way for agents to be creative is to have access to information at meta-levels.

Fuzzy Logic

In the logics described above, we have been concerned with truth: whether propositions and sentences are true. However, with some natural language statements, it's difficult to assign a "true" or "false" value. For example, is the sentence: "Prince Charles is tall" true or false? Some people may say true, and others false, so there's an underlying degree of truth (or uncertainty) that we may also want to represent. This can be achieved with so-called "fuzzy" logics. The originator of fuzzy logics, Lotfi Zadeh, advocates not thinking about particular fuzzy logics as such, but rather thinking of the "fuzzification" of current theories, and this is beginning to play a part in AI. The combination of logics with theories of probability, and programming agents to reason in the light of uncertain knowledge, are important areas of AI research. Various representation schemes such as Stochastic Logic Programs have an aspect of both logic and probability.
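
As a minimal sketch of the fuzzy idea (the threshold heights of 160cm and 190cm are our own illustrative choices, not part of any standard), a degree of truth between 0 and 1 can be assigned to "X is tall" by a membership function:

def tall(height_cm):
    """Degree to which 'X is tall' holds: 0.0 means definitely not
    tall, 1.0 means definitely tall, values between mean partially
    tall. The 160cm and 190cm cut-offs are purely illustrative."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

print(tall(178))   # 0.6: tall to degree 0.6, rather than true or false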

Other logics you may consider include:

Multiple valued logics, where additional truth values such as "unknown" are allowed. These have some of the advantages of fuzzy logics, without necessarily worrying about probability (a sketch of one such logic appears after this list).

Modal logics, which cater for individual agents' beliefs about the world, helping us deal with statements that may be believed to be true by some, but not all, agents. For example, one agent could believe that a certain statement is true, while another does not.

Temporal logics, which enable us to write sentences involving considerations of time, for example that a statement may become true some time in the future.
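
As promised above, here is a minimal sketch of a multiple valued logic, using Kleene's strong three-valued connectives, in which the extra value "unknown" propagates through sentences:

T, F, U = "true", "false", "unknown"

def k_not(p):
    return {T: F, F: T, U: U}[p]

def k_and(p, q):
    if p == F or q == F:
        return F            # a false conjunct settles the matter
    if p == T and q == T:
        return T
    return U                # otherwise we cannot tell

print(k_and(T, U))   # unknown
print(k_and(F, U))   # false: resolved despite the unknown value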

Given its well-defined syntax and precise semantics, it's not difficult to see why logic has been a very popular representation scheme in AI.

4.2 Semantic Networks

I'm sure many of you will have drawn diagrams in order to clarify your thoughts in some way. These may have represented causal information, such as one event leading to another, or relationships between ideas.

This offers evidence (and there is much more from psychological studies), that humans tend to store and manipulate knowledge in terms of associations and hierarchies, rather than in terms of lists of statements in some logic. This gives us the starting point for ways of representing knowledge in graphical networks.

Graphs are very easy to store inside programs because they can be succinctly represented with nodes and edges. However, if they are going to be of any use to our agents, then we will need to impose some formalism. To see why, suppose our agent wants to work out approximate age differences between people, and has some information about a family represented as an arbitrary network of relationships. If we tell our agent that Jason is 25 years younger than Bryan, who is 30 years younger than Arthur, who is 5 years older than Jim, then this information will be of little use in telling us the rough age difference between Jason and Julia. If instead we had arranged the knowledge of the family relationships graphically, for instance as a family tree in which corresponding links have the same form, then our agent could guess at an age difference of around 20 years between Jason and Julia, because the links between the nodes are the same.
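
We can sketch this formalised version in Python; the placement of Julia as Jim's daughter is our own illustrative assumption, and the point is only that consistently typed links let the agent estimate an unknown value from known values on links of the same type:

# Edges are typed: (link_type, older_person, younger_person, age_gap).
edges = [
    ("parent_of", "bryan", "jason", 25),   # Jason is 25 years younger
    ("parent_of", "arthur", "bryan", 30),  # Bryan is 30 years younger
    ("parent_of", "jim", "julia", None),   # same link type, gap unknown
]

# Because the jim->julia link has the same type as the others, the
# agent can guess its age gap from the known gaps on that link type.
known = [gap for (kind, _, _, gap) in edges
         if kind == "parent_of" and gap is not None]
guess = sum(known) / len(known)
print(f"guessed age gap between julia and jim: about {guess:.0f} years")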

This highlights a big problem with concept networks: because the links between nodes can be so arbitrary, we have to work hard at formalising things before we can use the graphs in intelligent tasks. Any such formalism which aims to capture semantics graphically is called a semantic network. Many formalisms for concept networks have been introduced in AI, mostly with the goal of representing natural language sentences graphically, in particular by Roger Schank. His conceptual dependency theory managed to narrow down the labels for edges in graphs to just a few possibilities. The advantage of this scheme is that, when reduced to graphical form, two sentences which have the same meaning are represented with identical graphs. Unfortunately, it is still not clear whether a program can reliably reduce natural language sentences to the conceptual dependency format.

A more recent semantic network scheme is that of conceptual graphs, introduced by John Sowa and discussed in the Luger and Stubblefield AI textbook. Each conceptual graph represents a single proposition such as "my dog is called spot" or "all buildings have windows". They have concept nodes, which can represent concrete concepts which we can visualise, such as "restaurant", "dog", or "my dog spot". We have little trouble making an image of individual concepts such as "my dog spot", but equally we can visualise generic concepts such as a dog. Concept nodes can also represent abstract concepts such as "anger" which we may not be able to visualise.

Conceptual graphs do not use labels on their arcs for describing relationships between concepts. Instead, they put in an extra node between two related concepts. These extra nodes are called conceptual relations, and we usually draw them with oval borders in conceptual graphs, as opposed to rectangles for concept nodes. An advantage of using conceptual relations rather than labelling arcs is that it is easier to represent relationships between more than two concepts. A single relationship between multiple individuals, such as in the proposition "James, John and Jack are brothers", can be represented by a single conceptual relation node with an arc to each of the three concept nodes, as in the sketch below.
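
A minimal sketch of this structure in Python (the node labelling is our own; a real conceptual graph system would add the types and hierarchies mentioned below):

# Concept nodes (drawn as rectangles) and conceptual relation nodes
# (drawn as ovals) are kept apart; arcs connect relations to concepts.
concept_nodes = ["person: james", "person: john", "person: jack"]
relation_nodes = ["brothers"]
arcs = [("brothers", c) for c in concept_nodes]

# One relation node with three arcs captures the single relationship
# between all three individuals - no labels on the arcs are needed.
for relation, concept in arcs:
    print(f"({relation}) --- [{concept}]")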

There are further considerations about types, hierarchies and propositions which add further formality to this representation framework.

4.3 Production Rule Representations

Another way to represent knowledge is as a set of production rules. These are condition-action pairs: each rule defines a condition which, if satisfied in a certain situation, causes the rule to "fire", i.e., its action to be carried out. In terms of the previous lecture on search, production rules use the current state in the search to check a condition, and if the condition is satisfied, the action part of the production rule chooses which operator to use, and carries out the operation.

For example, a set of production rules for the task of getting home might be:

  1. If waiting at the bus stop and the bus turns up, then get on the bus
  2. If on the bus, then pay the driver
  3. If on a bus and have just paid the driver, then find a seat

and so on. In this case, there is only one production rule for each situation. In the general case, however, many production rules will have similar conditions which may be met by the same situation. The set of production rules whose conditions are met in a particular situation is called the conflict set. When an agent chooses a member of the conflict set in a situation, and fires that production rule, this is called conflict resolution. Of course, the strategy for choosing production rules from the conflict set is the search strategy, and an agent may employ heuristics to improve the intelligence behind the selection.

Naturally, the action prescribed by the chosen production rule is likely to change the state of the world (the agent's situation). This means that the current conflict set may no longer be appropriate, so a new one must be compiled. This continues until the goal state has been reached. Any system which uses production rules in such a recognize-act cycle is called a production system. The production rule representation has been used to design AI programming languages, such as the CLIPS language written at NASA.
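
To make the recognize-act cycle concrete, here is a minimal sketch of a production system for the bus example in Python (the state representation and the trivial "first match" conflict resolution strategy are our own simplifications):

state = {"at_bus_stop": True, "bus_here": True,
         "on_bus": False, "paid": False, "seated": False}

def rule1(s):   # If waiting at the bus stop and the bus turns up...
    return s["at_bus_stop"] and s["bus_here"]
def act1(s):
    s.update(at_bus_stop=False, on_bus=True)   # ...get on the bus

def rule2(s):   # If on the bus (and have not yet paid)...
    return s["on_bus"] and not s["paid"]
def act2(s):
    s["paid"] = True                           # ...pay the driver

def rule3(s):   # If on the bus and have just paid the driver...
    return s["on_bus"] and s["paid"] and not s["seated"]
def act3(s):
    s["seated"] = True                         # ...find a seat

rules = [(rule1, act1), (rule2, act2), (rule3, act3)]

# The recognize-act cycle: build the conflict set, resolve it by
# choosing one rule to fire, act, and repeat until nothing fires.
while True:
    conflict_set = [(c, a) for (c, a) in rules if c(state)]
    if not conflict_set:
        break
    condition, action = conflict_set[0]   # trivial conflict resolution
    action(state)

print(state)   # seated on the bus, having paid the driver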

4.4 Frame Representations

When an agent encounters a new situation, it will need to retrieve information in order for it to act rationally in that situation. This information is likely to be multi-faceted and hierarchical, and one way of structuring the knowledge is in terms of frames. These are frameworks consisting of slots, with each slot containing information in various representations, including logical sentences and production rules. A slot can also contain another frame, which gives us a hierarchy.

Each frame represents a stereotypical object or situation and can be recalled whenever an agent encounters an object or situation which roughly fits the stereotype. After retrieving an appropriate frame, an agent will adapt it by changing some of the defaults, filling in blanks, etc. Some of the information is procedural, so that when a blank is filled in with certain values, a procedure must be carried out. Hence, not only will the frame provide a way of storing information about what is currently happening, it can also be used to dictate how to act rationally in that situation.

To make the frame representation as flexible as possible, different types of information are allowed in the slots. These include:

  1. Information for choosing the frame. This might be a name, or id number. It may also be information about situations in which this frame should be retrieved, or some descriptors for the stereotype the frame represents. For example, a frame for a table might give some physical specifications and if a new object fits those specifications to a degree, then the frame should be retrieved.
  2. Information about relationships between this frame and others, for example, whether this frame is a generalisation or specialisation of another frame, or whether two frames should never be considered at the same time.
  3. Procedures which should be carried out after various slots have been filled in. These procedures could include: filling in particular values in other slots, retrieving other frames, or any rational action an agent should do in a situation where a particular value for a slot has been identified.
  4. Default information. In situations where certain information required for the frame is missing, defaults can be specified. For instance, a table may be assumed to be wooden until this information can be ascertained. Default information is used in choosing actions until more specific information is found.
  5. Blank slots. These are flagged to be left blank unless required for a particular task. For example, in a frame for lectures, the room of the lecture may be irrelevant for taking notes, but become important when planning how to get to lunch afterwards.

Frames are extensions of the traditional 'record' datatype used in databases. If you choose to program your agents in an object-oriented language such as Java or C++, then you will be able to represent knowledge as objects, which are very similar to frame structures.
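
For instance, here is a minimal sketch of a frame as a Python class (the slot layout, defaults and procedural attachments are our own design, illustrating points 3 and 4 from the list above):

class Frame:
    def __init__(self, name, defaults=None, procedures=None):
        self.name = name
        self.slots = dict(defaults or {})     # default information
        self.procedures = procedures or {}    # slot -> action to fire

    def fill(self, slot, value):
        self.slots[slot] = value
        if slot in self.procedures:
            self.procedures[slot](value)      # procedural attachment

table = Frame("table",
              defaults={"material": "wooden"},  # assumed until known
              procedures={"material": lambda v: print("material is", v)})

print(table.slots["material"])   # acts on the default: wooden
table.fill("material", "glass")  # overriding a default fires the procedure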

Marvin Minsky

Marvin Minsky is another of the founding fathers of Artificial Intelligence. He was one of the first, and strongest, supporters of frame schemes for AI representations. He was the first person to publish about learning in neural networks (in his PhD thesis), and he built the first neural network simulator, called SNARC. His other inventions include the confocal scanning microscope, the Muse synthesizer and the first LOGO turtle. He currently works on the problem of commonsense reasoning in Artificial Intelligence.


As an example of using frames, suppose an agent is taking notes at a lecture and wants to decide how much attention to pay and determine any other ways in which it should behave. It searches for frames which match the given situation: it is in a meeting of some kind, so it retrieves that frame. In the specialisations slot of meetings is another frame, lecture, which is more appropriate because the context for that is a large number of students. It retrieves the lecture frame and starts filling in slots.

The first slot is the name of the course, which in this case is operating systems. The next slot is the level of the course, and it's difficult. This fires the procedural rule: "if it's a difficult course, pay attention", so the agent begins to pay more attention. The next slot is lecturer, and this is a frame in itself, so the agent retrieves the lecturer frame and starts filling in the slots on that frame. The first slot is tolerance, and this lecturer is not tolerant. This fires more procedural rules, such as "if it's an intolerant lecturer, then turn off your mobile phone", so the agent turns its phone off. Having dealt with the lecturer frame, it returns to the lecture frame and looks at the next slot, which is the room number. This is flagged to be not important for the task of taking notes, so the agent doesn't fill it in. The frames in this example can be sketched as follows.
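
This is a minimal sketch in Python (the slot names and procedural rules follow the example above; the encoding itself is our own):

def on_level(value, agent):
    if value == "difficult":
        agent["attention"] = "high"    # "if difficult, pay attention"

def on_tolerance(value, agent):
    if value == "intolerant":
        agent["phone"] = "off"         # "if intolerant, phone off"

lecture = {"slots": {}, "triggers": {"level": on_level}}
lecturer = {"slots": {}, "triggers": {"tolerance": on_tolerance}}
agent = {"attention": "normal", "phone": "on"}

def fill(frame, slot, value):
    frame["slots"][slot] = value
    trigger = frame["triggers"].get(slot)
    if trigger:
        trigger(value, agent)          # procedural rule fires

fill(lecture, "course", "operating systems")
fill(lecture, "level", "difficult")        # agent pays more attention
fill(lecture, "lecturer", lecturer)        # a slot containing a frame
fill(lecturer, "tolerance", "intolerant")  # agent turns its phone off
# The room number slot is flagged unimportant for note-taking: left blank.
print(agent)   # {'attention': 'high', 'phone': 'off'}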

We can see how this scheme of retrieving frames, filling in slots and reacting to production rules in the slots can be used to make an agent act rationally. Note that search may be involved in order to use the frames representation: both in order to find the correct frames for a situation, and as part of procedures carried out when filling the slots.