“I am currently standing 2 m from the wall”

and other completely useless information!

Hi everyone! Today I want to talk a bit about communication, and more precisely purposeful communication. Are you up for it? If so, keep reading. If not… well, see you next time maybe 🙂

This is the first article in a series on multi-agent communication, and more precisely on communication between humans and artificial agents. In this article, we are going to cover the basics and lay out the problem; in future articles, we will look at each part in more detail and check on some of the models I am using to solve this problem.

Purposeful Communication

We all have a rather good idea of what “communication” is. It is the act of exchanging information between two or more people. But do you know what we call “purposeful communication”? It is a term rather hyped in the leadership/management world, used to describe communications “that matter”. Purposeful communicators are said to be effective because they think about the “why” behind each act of communication, making each one efficient and impactful.

In Artificial Intelligence, the term is rather new and not very widespread, but it is used to describe a similar idea: artificial agents can choose to communicate when it helps them reach a goal.

Currently, the most common communication systems are conversational agents. You all know the chatbots on online stores that pop up when you open the page with a sometimes intrusive “How can I help you?”. These are a perfect example of non-purposeful communication. The bot itself has no goal: it simply reacts to the sentences you write in the best way possible. Very often, it does not even have a memory of what it told you before.

Other types of conversational agents exist, more advanced and built for different purposes (training, medical information, psychological help…). For instance, if you have never heard of Woebot[1], it is a conversational agent designed to teach you Cognitive Behavioral Therapy tools to help you deal with anxiety and stress and improve your mood. It is a very nice little bot!

The Woebot AI

But what all these agents have in common is the lack of a “goal” in their intelligence. Or more precisely, their only goal is to talk with you. Maybe the designers of these artificial agents had a goal when they created them (for instance, for Woebot, to help you with these issues), but this goal is not encoded per se in the agent, and the agent itself does not plan any specific action to reach a specific goal. It follows a predefined strategy.

And as you might guess, purposeful communication deals with communicating for a goal. This goal can be to rescue a victim after a natural disaster, to help an elderly person clean their apartment, or to inform the user of an autonomous car about a traffic jam so they arrive at their destination faster… Some systems already do some purposeful communication. For instance, when your GPS informs you that there is a faster route and asks whether you want to take it, that is purposeful communication.

Purposeful communication implies communication planning: the agent has to plan for and with the act of communication by evaluating its relevance.

Content, Timing, and Medium

As an example for the rest of the article, let us consider the following problem:

Alice is from Portugal and she is traveling in Sweden during the summer. Bad luck: this year in Sweden, it is raining a lot during the summer[2]. Her friend Bob is supposed to join her several days later. After two days in Sweden, when she calls Bob, Alice knows that he will be expecting the weather to be very nice[3]. After all, it’s summer. Alice will therefore naturally explain to Bob, without him having to ask, that the weather is currently pretty bad, so that he knows he should pack a rain jacket. At the same time, Alice’s friend Carol lives in Sweden and has been experiencing the rain. Therefore, when Alice calls Carol to ask when they can meet, she is not going to tell her about the rain, as she assumes Carol already knows.

Illustration of the communication problem between Alice, Bob and Carol

This example is pretty simple, but it illustrates very well something we, as humans, do very often. We choose which information to give our partners depending on what we think is relevant for them (and for us).

There are three very important aspects when we discuss the relevance of an act of communication:

  1. Content
  2. Timing
  3. Medium

In our example, the content of the communication between Alice and Bob is the fact that the weather is pretty bad. The reason why the content of a communication matters seems quite obvious: if Alice told Bob “I am currently standing right in front of a wall”, that would be pretty useless, wouldn’t it? In our case, however, Alice gives Bob information that is going to change his future plans (he is going to put a rain jacket in his suitcase).

The timing of the communication, that is, when the communication is performed, is very important as well. A piece of information that is very relevant at a given time might become completely useless later. For instance, if Alice told Bob “It is raining so much here” after Bob arrived at Stockholm Airport, the communication would be much less relevant, as Bob could no longer pack a rain jacket at home.

Finally, an aspect that is often forgotten in artificial communication systems is the medium used to convey the communication. For instance, Alice could call Bob on the phone and talk, send him a message through Facebook Messenger, or use Morse code. The three media would convey the same content at the same time, but with very different results. Bob would probably pick up the phone directly and therefore receive the communication quickly. However, Bob does not look at Facebook often and might miss the message, delaying the communication or rendering it useless. And Bob does not know Morse code at all, which would make it very difficult for him to parse such a message, rendering it useless again[4].

The importance of medium. Cartoon by Royston Robertson
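To make these three aspects a bit more concrete, here is a minimal sketch in Python of how an agent might score a candidate act of communication. Everything in it (the `CommAct` structure, the `relevance` function, the numbers) is a hypothetical illustration of the idea, not one of the actual models from my work:

```python
from dataclasses import dataclass

@dataclass
class CommAct:
    """A candidate act of communication (hypothetical illustration)."""
    content_gain: float   # how much the info would change the receiver's plans (0..1)
    hours_left: float     # time remaining before the info becomes useless
    medium_reach: float   # probability the medium delivers the message in time (0..1)

def relevance(act: CommAct) -> float:
    """Score an act along the three aspects: content, timing, medium.

    The act is only worth performing if the content matters, there is
    still time for the receiver to act on it, and the chosen medium is
    likely to actually deliver the message.
    """
    timing = 1.0 if act.hours_left > 0 else 0.0  # too late => useless
    return act.content_gain * timing * act.medium_reach

# Alice telling Bob about the rain, before he packs his suitcase:
phone_call = CommAct(content_gain=0.9, hours_left=48.0, medium_reach=0.95)
morse_code = CommAct(content_gain=0.9, hours_left=48.0, medium_reach=0.0)  # Bob can't parse it

print(relevance(phone_call))  # 0.855 -> worth communicating
print(relevance(morse_code))  # 0.0   -> same content, useless medium
```

The point is simply that the same content can go from highly relevant to completely useless depending on when it is sent and through which medium.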

Planning with communication

So far I have been talking about “planning for communication”: deciding what to communicate in order to inform. But communication planning is also about planning with communication. It means that I might ask my neighbor to open the window because it is currently too warm in the room, or a robot might ask another agent to open the door for it because its arms are full. Planning with communication means using communication actions as a way to reach the goal. It does not mean that I am incapable of opening the window myself, or that the robot is incapable of opening the door. It means that at this specific point in the plan, it is better for me to ask someone else to do something for me[5].

Planning with communication is still a very new topic in multi-agent planning, especially under uncertainty[6]. It requires the agent to be able to detect what it needs and which of its partners can provide the service. It also requires the agent to weigh whether it should ask or do the task itself. And on top of all that, it requires the agent to mix these decisions with the rest of its plan. The sketch below gives a flavor of that weighing step.
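Here is a small Python sketch of the door example, comparing a physical action against a communication action by expected cost. The names and numbers are assumptions made up for illustration, not a real planner:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float                # effort/time for the robot if it performs the action
    success_prob: float = 1.0  # a request may be refused; own actions assumed reliable here

def expected_cost(action: Action, fallback_cost: float) -> float:
    """Expected cost of an action, paying the fallback cost when it fails."""
    return (action.success_prob * action.cost
            + (1 - action.success_prob) * fallback_cost)

# The robot's arms are full: opening the door itself means putting
# everything down first (expensive); asking a partner is cheap, but
# the partner might be busy and refuse.
open_door_myself = Action("put objects down and open the door", cost=10.0)
ask_partner      = Action("ask partner to open the door", cost=1.0, success_prob=0.8)

fallback = 10.0  # if the partner refuses, the robot opens the door itself
best = min([open_door_myself, ask_partner],
           key=lambda a: expected_cost(a, fallback))
print(best.name)  # asking wins here: 0.8 * 1 + 0.2 * 10 = 2.8 < 10
```

The communication action is not chosen because the robot cannot open the door, but because asking happens to be cheaper at this point in the plan.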

A word of conclusion

So maybe this article was not very groundbreaking for you. After all, planning for and with communication is something that we, as humans, do every day, often without even realizing it. But it is something rather new for artificial agents, and out of this whole idea I am currently working on three research aims:

  1. Purposeful communication planning under uncertainty. This part aims at building a model that allows an agent to do all the planning for communication we mentioned, under state and action uncertainty. This means that the agent can only observe its environment partially, so the exact state of the system might be unknown. In addition, the agent is not 100% sure that the action it performs will lead to the result it expects. In the case of communication, it also means that the agent does not know exactly the state of knowledge of the other agent. After all, maybe Bob looked at the weather forecast and already knows that it is going to rain… (see the small belief-update sketch after this list).
  2. Model for task and communication planning. This part concerns the planning with communication aspect. How can we add communication actions as a valid way to reach a goal?
  3. The receiver’s point of view. We haven’t talked much about this part yet, but the idea is to take Bob’s point of view into account: how should Bob integrate the communication coming from Alice? How does this communication change Bob’s state of knowledge and future actions?
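To give a taste of the first aim, here is a tiny Python sketch of how Alice could maintain a belief about Bob’s state of knowledge with a standard Bayes update. Again, the function name and the numbers are hypothetical, just to illustrate the kind of reasoning involved:

```python
def update_belief(prior: float, likelihood_if_knows: float,
                  likelihood_if_not: float) -> float:
    """Bayes update of Alice's belief that Bob already knows about the rain."""
    evidence = (prior * likelihood_if_knows
                + (1 - prior) * likelihood_if_not)
    return prior * likelihood_if_knows / evidence

# Prior: it's summer in Sweden, so Alice assumes Bob expects sunshine.
belief_bob_knows = 0.1

# Observation: on the phone, Bob mentions that he checked the forecast.
# Someone who knows about the rain is much more likely to say this.
belief_bob_knows = update_belief(belief_bob_knows,
                                 likelihood_if_knows=0.9,
                                 likelihood_if_not=0.2)

print(round(belief_bob_knows, 2))  # ~0.33: telling Bob is probably still worth it
```

Under partial observability, this belief about the other agent is exactly the kind of quantity the agent has to plan with when deciding whether communicating is still relevant.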

I will go into more detail in the next articles of this series, and I hope this one gave you the motivation to come back later for more!


  1. Which is very probable, as it is not very well known yet.
  2. Which is not common at all. I promise we have very sunny summers :P
  3. The problem of A knowing that B knows will be the topic of the second article of the series. For now, we just consider that this is the case.
  4. This relates to another aspect I heard about recently during a talk in which I presented this work, and which seems to be going around the Machine Learning community: the difference between “interesting information” and “useful information”. Depending on the information delivered and how it is delivered, a communication can be interesting and useful, or interesting and completely useless.
  5. In this regard, it shares some ideas with task-allocation techniques in multi-agent systems (such as the Contract Net Protocol and some auction systems). I might talk about these another day.
  6. Which is my main working hypothesis.