Creating Dynamic Story Plots with Continual Multiagent Planning
Michael Brenner
Albert-Ludwigs-Universität
Freiburg, Germany
brenner@informatik.uni-freiburg.de
Abstract
An AI system that is to create a story (autonomously or in interaction with human users) requires capabilities from many subfields of AI in order to create characters that themselves appear to act intelligently and believably in a coherent story world. Specifically, the system must be able to reason about the physical actions and verbal interactions of the characters as well as their perceptions of the world. Furthermore, it must make the characters act believably, i.e. in a goal-directed yet emotionally plausible fashion. Finally, it must cope with (and embrace!) the dynamics of a multiagent environment where beliefs, sentiments, and goals may change during the course of a story and where plans are thwarted, adapted and dropped all the time. In this paper, we describe a representational and algorithmic framework for modelling such dynamic story worlds, Continual Multiagent Planning. It combines continual planning (i.e. an integrated approach to planning and execution) with a rich description language for modelling epistemic and affective states, desires and intentions, sensing and communication. Analysing story examples generated by our implemented system, we show the benefits of such an integrated approach for dynamic plot generation.
Introduction
To tell a story is a challenging task that involves many (if not most) aspects of human intelligence. If the storyteller is an AI system, it must effectively simulate a coherent story world and control a number of virtual characters in a manner that seems believable to humans, much as in the Turing Test. Thus, creating believable stories, whether interactively or not, can be considered an "AI-complete" task and requires methods from many subfields of AI, e.g., planning, virtual agents and multiagent systems, reasoning about beliefs and intentions, affective computing, and dialogue systems.
Among these methodologies, planning has probably received the most attention in story generation (Meehan 1977; Lebowitz 1985; Riedl and Young 2004; Si, Marsella, and Riedl 2008). Its relevance results from the structural similarity between plans and plots, both of which describe temporal and causal relations between events. Indeed, temporal-causal coherence, as modeled by the semantics of classical STRIPS-like planning formalisms, can be considered a necessary condition for stories (at least non-postmodern ones).
Yet, planning research follows a different research agenda than Narrative Intelligence and has therefore developed representations and algorithms that are only of limited use for plot creation. As a result, while plans are used in storytelling systems, many interesting aspects of narrative, e.g., motivation and emotion, must be handled outside the planner. While this is fine in itself, it prevents long-term plotting, usually done by a planner, from taking these aspects into account, e.g., planning for motivation changes based on emotional reactions to events. Therefore, in this work, we try to integrate ideas from many of the AI fields mentioned above directly into the planning formalism and algorithm to make it directly suitable for narrative generation.
The paper is structured as follows. We first review relevant work in fields outside narrative research. Then we describe a planning formalism that integrates many of these relevant aspects. We then describe a planning algorithm using this representation, Continual Multiagent Planning, and briefly present our implementation. Analysing a story generated by our program, we discuss the benefits of our approach. We conclude with a discussion of its relation to previous work in narrative generation and its possible future uses and extensions.
Related Work I
Our work integrates ideas from several subfields of AI, in particular classical and distributed planning, multiagent systems, knowledge representation and reasoning (mainly about perceptions, beliefs, and emotions) and dialogue systems. Due to space limits, we can only discuss a few prototypical inspirations here. At the end of the paper, we will relate our approach to previous work in storytelling research.
Most stories feature several characters and can thus be regarded as multiagent environments. To model the beliefs of different agents (and their reasoning about each other) we will integrate multiagent epistemic modalities into the planning representation (Fagin et al. 1995). Additionally, similarly to BDI models of multiagent cooperation, we will explicitly model the desires and intentions of different agents (Grosz and Kraus 1996). In order to describe how characters gather new information, we will need to model communication and perception as well. Here, we are inspired by approaches to collaborative dialogue (Lochbaum 1998) and planning with sensing (Petrick and Bacchus 2002).
Algorithmically, our work is based on the intuition that the dynamics of plots are hard to describe naturally with a single plan. Often, plots depend on plans failing or being thwarted, then being dropped or adapted, and finally succeeding or failing (where which is which often lies in the eye of the beholder), i.e. on a series of planning and execution cycles performed by the characters. Therefore, what we develop is a Distributed Continual Planning (DCP) method (DesJardins et al. 1999; Brenner and Nebel 2009).
Modelling Story Worlds in a Multiagent
Planning Formalism
Story worlds usually are multiagent environments. To describe plausible behaviour in such worlds, we need to reason about their dynamics. Thus, we must be able to represent not only the physical actions that agents can perform, but also their perceptual and communicative capabilities as well as their (mutual) beliefs and (joint) goals. To do this in a domain-independent fashion, we use a formal description language, the Multiagent Planning Language MAPL (Brenner and Nebel 2009). In this section, we present MAPL informally and discuss the extensions made for this paper.
MAPL is a multiagent variant of PDDL (the Planning Domain Definition Language), the de facto standard language for classical planning. Instead of propositions, MAPL uses multi-valued state variables (MVSVs). For example, a state variable color(ball) would have exactly one of its possible domain values red, yellow, or blue, as compared to the three semantically unrelated propositions (color ball red), (color ball yellow), (color ball blue). MVSVs have successfully been used in classical planning in recent years, but they also provide distinct benefits when representing story worlds: a) incomplete knowledge can be expressed by adding an unknown value to the domain of an MVSV; b) this idea can be extended to model beliefs and mutual beliefs among characters (Fagin et al. 1995); c) knowledge-seeking actions can be modelled as supplying a yet unknown MVSV value, so that sensing and communicative acts are modelled like wh-questions (what colour? where? etc.); d) due to the mutual exclusivity of MVSV values, they are well suited for Initial State Revision (Riedl and Young 2006).
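To make these mechanisms concrete, consider the following minimal Python sketch (our own illustration, not MAPL syntax; all names are invented). It shows an MVSV whose domain is extended with an explicit unknown value, per-agent belief copies, and a sensing step that answers the wh-question by supplying the missing value:

    # Illustrative sketch only; MAPL expresses this declaratively.
    UNKNOWN = "unknown"

    class StateVariable:
        def __init__(self, name, domain):
            self.name = name
            self.domain = set(domain) | {UNKNOWN}  # a) add 'unknown' value
            self.value = UNKNOWN

        def assign(self, value):
            assert value in self.domain            # d) values are mutually exclusive
            self.value = value

    # b) beliefs: each agent keeps its own copy of the state variable
    world = StateVariable("color(ball)", {"red", "yellow", "blue"})
    arthur = StateVariable("color(ball)", {"red", "yellow", "blue"})
    world.assign("red")

    # c) a sensing action supplies the yet unknown value ("what colour?")
    if arthur.value == UNKNOWN:
        arthur.assign(world.value)
    print(arthur.value)                            # prints: red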
In addition to the usual preconditions and effects, MAPL actions have a controlling agent who executes the action. MAPL assumes that the controlling agents are fully autonomous when executing actions, i.e. there is no external synchronization or scheduling component. Consequently, an action will only be executed if, in addition to its preconditions being satisfied, the controlling agent knows that they hold. Implicitly, all MAPL actions are extended with such knowledge preconditions. Similarly, implicit commitment preconditions describe, intuitively, that if action a controlled by agent A is included in agent B's plan, this can only be done if A has agreed to perform a.
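As a rough sketch of how the explicit and implicit preconditions interact, the following Python fragment (invented names and encoding, not part of MAPL) checks all three conditions before admitting an action into execution:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        agent: str                         # the controlling agent
        preconds: frozenset = frozenset()  # ordinary preconditions

    def may_execute(action, planner, state, beliefs, commitments):
        # explicit preconditions must hold in the world state
        if not action.preconds <= state:
            return False
        # implicit knowledge precondition: the controlling agent must
        # itself know that the preconditions hold
        if not action.preconds <= beliefs[action.agent]:
            return False
        # implicit commitment precondition: if the action sits in another
        # agent's plan, its controller must have agreed to perform it
        if planner != action.agent and (action.agent, action.name) not in commitments:
            return False
        return True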
MAPL models three different ways to affect the beliefs of agents: sensing, copresence, and communication. Sensor models describe under which conditions the current value of a MVSV can be perceived. Copresence models are multiagent sensor models that induce mutual belief about the perceived state variable. Informally, agents are copresent when they are in a common situation where they can not only perceive the same things but also each other. Communicative acts in MAPL include declarative statements, questions, requests, and acknowledgments. While declarative statements change the belief state of another agent similarly to sensory actions, the other types of communicative acts affect aspects of the agent that are typically considered static in AI Planning, namely the goals of agents.
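The following sketch (our own dictionary-based encoding) illustrates the distinction between a plain sensor model and a copresence model that additionally induces mutual belief:

    def sense(agent, var, world, beliefs, condition):
        # sensor model: if its condition holds, the agent perceives
        # the current value of the state variable
        if condition(world):
            beliefs[agent][var] = world[var]

    def copresent_sense(agents, var, world, beliefs, mutual, condition):
        # copresence model: copresent agents perceive the value and each
        # other, so the perceived value becomes mutually believed
        if condition(world):
            for a in agents:
                beliefs[a][var] = world[var]
            mutual.setdefault(frozenset(agents), {})[var] = world[var]

    world = {"at(smaug)": "cave"}
    beliefs = {"arthur": {}, "valiant": {}}
    mutual = {}
    copresent_sense(["arthur", "valiant"], "at(smaug)", world, beliefs,
                    mutual, lambda w: True)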
MAPL goals are first-order formulae, like in PDDL. For storytelling we mostly use them in a specific conditional form: by introducing a new MAPL keyword, "currently", we can refer to the current state of the world and the agents' beliefs about it in such a conditional goal formula. MAPL also has temporary subgoals (TSGs), which must be satisfied at some point in the plan, but may be violated in the final state. TSGs are represented as explicit symbols in MAPL and thus can be reasoned about by a planner. In particular, they can be active or inactive. This is also true for conditional goals, whose antecedent (condition) may hold in the current state or not. Both kinds of goal activation mimic how commitment turns desires into intentions in BDI models of human practical reasoning (Bratman, Israel, and Pollack 1988; Cohen and Levesque 1990). Throughout the paper, we will often refer to activated goals as intentions. Assertions are counterfactual statements, e.g., "If I knew where to find a sword, I could slay the dragon", that the continual planner may use as temporary subgoals in order to gather missing information necessary to achieving its main goal. Assertions enable the agent to postpone planning for subproblems until it has gained more knowledge, i.e. by partially executing a plan and then switching back to more detailed planning. Thus, assertions encourage proactive goal-driven information gathering (Brenner and Nebel 2009), which for plot generation often seems to be a desirable character trait.
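A minimal sketch of how an assertion can drive proactive information gathering (illustrative Python; the variable names and the know-val encoding are our own, not MAPL's):

    from dataclasses import dataclass

    UNKNOWN = "unknown"

    @dataclass
    class Assertion:
        missing: str    # the state variable that is still unknown
        achieves: str   # the goal that could be planned for once it is known

    # "If I knew where to find a sword, I could slay the dragon."
    sword_assertion = Assertion(missing="location(sword)",
                                achieves="dead(dragon)")

    def apply_assertion(assertion, beliefs, temp_subgoals):
        if beliefs.get(assertion.missing, UNKNOWN) == UNKNOWN:
            # postpone detailed planning; adopt closing the knowledge
            # gap as a temporary subgoal (proactive information gathering)
            temp_subgoals.append(f"know-val({assertion.missing})")
            return "postponed"
        return "replan"  # gap closed: a new detailed planning phase triggers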
MAPL plans are partially ordered, using different kinds of causal links. This is advantageous for plot generation because plans provide explanations for the behaviour of the characters. In contrast to other plan-based approaches, we will not use plans directly to represent the whole plot. Since during a continual planning episode usually multiple plans are being generated, executed, and revised, we consider as the plot the execution history of the episode, annotated with relevant (possibly false) beliefs and goals. This plot graph comprises a totally ordered "fabula", i.e. the sequence of events that occur. Yet it also uses explanations from plans and plan monitoring to relate actions to each other by various types of causal links. Such causally annotated histories can be naturally regarded as plots in the sense of E. M. Forster (Forster 1927) and provide ample information for discourse generation, i.e. the presentation of the plot.
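A plot graph of this kind can be represented quite directly; the following skeleton (our own data structure, with link types taken from the legend of figure 2) records the totally ordered fabula together with typed explanation links:

    class PlotGraph:
        LINK_TYPES = {"temporal", "causal", "threat-prevention", "false-belief"}

        def __init__(self):
            self.fabula = []  # totally ordered sequence of executed events
            self.links = []   # typed explanation links between events

        def record(self, event):
            self.fabula.append(event)

        def explain(self, cause, effect, link_type):
            # explanations come from plans and plan monitoring and relate
            # actions (and possibly false beliefs) to each other
            assert link_type in self.LINK_TYPES
            self.links.append((cause, effect, link_type))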
Continual Multiagent Planning
How can we generate MAPL plot graphs? The method presented in this section, Continual Multiagent Planning (CMP), is a distributed algorithm, i.e. it describes planning by multiple agents, who all have different motivations and beliefs about the world. Being fully distributed, it can be applied to large interactive environments, e.g., multiplayer online role-playing games. However, it also models planning for multiple agents, since even an individual agent's plan may involve several agents if that is necessary to achieve her goals. For storytelling, this means that CMP allows for both character-centric and author-centric plot generation.
The CMP algorithm is shown in algorithm 1 (its subprocedures will only be presented informally). CMP extends Continual Collaborative Planning (CCP), an approach developed in the context of situated human-robot interaction (Brenner and Nebel 2009). Like in CCP, CMP agents deliberately switch between planning, (partial) plan execution, monitoring, plan adaptation and communication. However, CMP agents are not required to be benevolent and always willing to adopt any TSGs proposed by other agents; luckily so, since such benevolence would prevent conflict, intrigue and drama in the generated plots.
Algorithm 1 CMPAGENT(S, G)
  P = ∅
  Received no message:
    if S satisfies G then
      return "goal reached"
    else
      P = MONITORINGANDREPLANNING(S, G, P)
      if P = ∅ then
        return "cannot achieve goal G"
      else
        e = EXECUTENEXTACTION(S, P)
        (S, P) = STATEESTIMATION(S, e)
  Received (tell-val v x) from agent a:
    add v = x to S
  Received request(sg) from agent a:
    if cooperative(self, a) ∉ S then
      send "will not adopt request sg" to a
    else
      P' = MONITORINGANDREPLANNING(S, G ∪ {sg}, ∅)
      if P' = ∅ then
        send "cannot execute request sg" to a
      else
        add sg to G as temporary subgoal
        send "accept request sg" to a
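For readers who prefer executable notation, here is a rough Python transcription of algorithm 1; the helper functions are trivial stubs standing in for the subprocedures described in the text:

    def satisfies(S, G):                    return G <= S        # stub
    def monitoring_and_replanning(S, G, P): return P             # stub
    def execute_next_action(S, P):          return P[0]          # stub
    def state_estimation(S, e):             return S, []         # stub
    def send(recipient, msg):               print(f"to {recipient}: {msg}")

    def cmp_agent_step(self_id, S, G, P, message=None):
        """One iteration of CMPAGENT(S, G) from algorithm 1."""
        if message is None:                      # received no message
            if satisfies(S, G):
                return "goal reached", S, G, P
            P = monitoring_and_replanning(S, G, P)
            if not P:
                return "cannot achieve goal", S, G, P
            e = execute_next_action(S, P)
            S, P = state_estimation(S, e)
        elif message[0] == "tell-val":           # ("tell-val", v, x)
            _, v, x = message
            S.add((v, x))                        # add v = x to S
        elif message[0] == "request":            # ("request", sg, a)
            _, sg, a = message
            if ("cooperative", self_id, a) not in S:
                send(a, f"will not adopt request {sg}")
            else:
                new_plan = monitoring_and_replanning(S, G | {sg}, [])
                if not new_plan:
                    send(a, f"cannot execute request {sg}")
                else:
                    G = G | {sg}                 # adopt sg as temporary subgoal
                    send(a, f"accept request {sg}")
        return "running", S, G, P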
When used for centralised planning (i.e. one planner controls all agents) or when no communication is taking place, a CMP agent alternates between (re-)planning and execution. Subprocedure MONITORINGANDREPLANNING first determines whether a new planning phase should be triggered, either because the agent has sensed an external event that has invalidated its previous plan, or because her goals themselves have changed, or because of an assertion that was previously used to advance planning despite missing information and whose detailed planning is now triggered because additional knowledge has become available (Brenner and Nebel 2009). If, for any of the above reasons, planning is triggered, the agent replans for those parts of its plan that are no longer valid. The details of the actual (re)planning are irrelevant for the purpose of this paper (any state-of-the-art PDDL planner may be adapted for the purpose); it results in an asynchronous MAPL plan that specifies actions for (possibly) several agents and the causal and temporal relations between them necessary for achieving the planning agent's goal. If the plan involves agents other than the planning agent or those characters she can directly control, the new plan must ensure that they are committed to the (sub)goals their actions contribute to. In the original CCP, the new plan would have included at most one negotiate-plan(a) action for each agent a appearing in the plan, since all agents were supposed to be cooperative and their exact role in the plan could freely be discussed in the negotiation phase. This is different in CMP, where the planning agent must consider different options for making an agent a commit to a subgoal. This can either be done by negotiation as before, if a is known to be cooperative, or by some form of persuasion, i.e. indirect activation of a conditional goal of a. For example, a hunter may present a bear with a honeycomb to raise its appetite and make it walk into a trap. Indirect activation may also be recursive, e.g., when a bank robber r threatens a bank clerk c, thereby making cooperative(c,r) true and thus making c open for "requests" in the next step.
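The following fragment (invented encoding) contrasts the two commitment mechanisms and shows the recursive case from the bank robber example:

    def commitment_options(planner, other, state, conditional_goals):
        """Ways for 'planner' to make agent 'other' commit to a subgoal."""
        options = []
        if ("cooperative", other, planner) in state:
            options.append(("negotiate", other))       # direct negotiation
        for condition, goal in conditional_goals.get(other, []):
            # persuasion: plan an action whose effect makes the activation
            # condition of one of the other agent's conditional goals true,
            # e.g. presenting a honeycomb raises the bear's appetite
            options.append(("make-true", condition, goal))
        return options

    # recursive indirect activation: threatening the clerk makes
    # cooperative(clerk, robber) true, which enables a request next
    state = {("cooperative", "clerk", "robber")}       # effect of 'threaten'
    print(commitment_options("robber", "clerk", state, {}))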
As soon as a CMP agent has found (or repaired) a valid plan, it enters the execution phase (function EXECUTENEXTACTION). First, an action e on the first level of the plan, i.e. one whose preconditions are satisfied in the current state, is chosen non-deterministically. If the action is executable by the CMP agent himself (this includes communicative actions), it is executed. If not, the planning agent tries to determine whether the action was executed by its controlling agent, i.e. it actively observes changes in the environment relevant for its plans. In both cases, the CMP agent will try to update its knowledge about the world state based on the expected effects and the actual perceptions made (function STATEESTIMATION).
Implementation In our view, plots do not only consist of plans, but also of their execution and the resulting re-evaluation of beliefs, goals, and plans by all character agents. Such an approach can best be implemented in a simulation environment. This is most obvious in interactive narrative, where some characters are not controlled by the system, but by human users. Yet simulation is also a convenient way to compute the complex results of several characters acting simultaneously in a common environment, observing and influencing each other constantly, even if controlled by a single "author". Therefore we have implemented MAPSIM, a software environment that automatically generates multiagent simulations from MAPL domains. In other words, MAPSIM interprets the MAPL domain both as the planning domain for each CMP character and as an executable model of the environment, so that it can determine the results of the execution of the characters' actions.
Note that while for generating the story analysed in the following section we invoked MAPSIM non-interactively to emphasise the autonomy of the approach, MAPSIM can also be used interactively. Human users may "play" the role of any character and send MAPL commands to the simulation directly.
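A toy version of such a simulation loop might look as follows (our own sketch, not the actual MAPSIM implementation); the point is that the same domain supplies both each character's planning model and the executable world model:

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class Character:
        name: str
        policy: Callable[[dict], Optional[str]]  # plan/replan, pick an action
        beliefs: dict = field(default_factory=dict)

    def run_simulation(world, effects, characters, max_steps=20):
        """'effects' maps an action name to a function updating the world
        state, i.e. the domain doubles as the executable world model;
        'history' becomes the fabula of the resulting plot."""
        history = []
        for _ in range(max_steps):
            for char in characters:
                act = char.policy(world)
                if act is None:
                    continue
                world = effects[act](world)      # simulation executes the action
                history.append((char.name, act))
                for observer in characters:
                    observer.beliefs.update(world)  # simplistic observation step
        return history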
Figure 1: A story in the Quests domain non-interactively
created by MAPSIM.
1 This is a story about Smaug, King Arthur and Prince Valiant.
2 King Arthur was in the castle. 3 The treasure was in the cave. 4 King Arthur rode to the cave. 5 King Arthur saw that Smaug was in the cave.
6 King Arthur rode to the castle. 7 King Arthur saw that Prince Valiant was in the castle. 8 'Please bring me the treasure, Prince Valiant,' King Arthur said. 9 'As you wish, King Arthur,' Prince Valiant replied. 10 'Where is the treasure, King Arthur?' Prince Valiant asked. 11 'The treasure is in the cave, Prince Valiant,' King Arthur said. 12 'Thank you,' Prince Valiant said.
13 Prince Valiant rode to the cave. 14 Prince Valiant saw that Smaug was in the cave. 15 Smaug tried to kill Prince Valiant - but failed! 16 Prince Valiant saw that Smaug was not dead. 17 Prince Valiant killed Smaug.
18 Prince Valiant took the treasure. 19 Prince Valiant rode to the castle. 20 Prince Valiant gave King Arthur the treasure. 21 'Thank you for bringing me the treasure, Prince Valiant,' said King Arthur.
22 King Arthur and Prince Valiant lived happily ever after. Smaug did not.
Analysis of a Worked Example
In order to show the benefits of our integrated approach, we will now analyse a story generated by MAPSIM. It is reproduced in figure 1. During the creation of the story, a total of 20 different plans were created by the three characters and the simulation itself. On a 1.6 GHz Intel Pentium with 1 GB RAM, the total planning time was less than 500 ms.
As input, MAPSIM was given a formal MAPL domain description and individual goals and beliefs for each of the three characters as well as the true initial world state. The resulting plot graph is (for reasons of space) only shown partly in figure 2. It roughly corresponds to lines 9-15 of figure 1 and gives an impression of the MAPL representation of the plot. The first and last line of figure 1 have no direct correspondence in the plot graph, but are directly produced by MAPSIM: in line 1 the characters are introduced, whereas in line 22 MAPSIM reports on which of the agents have achieved their goal and which have not.[1]
[1] Obviously the textual output could be vastly improved, e.g., by proper generation of referring expressions. This output was generated using action-specific English templates with which the MAPL domain can be annotated. This way, plot graphs from arbitrary domains can quickly be rendered into readable natural-language output.
Multimodal interaction Note first that characters' behaviour as generated by CMP seamlessly interleaves physical action, sensing, and communication, e.g. in lines 6-8. Due to the explicit representation of epistemic states and information-gathering actions, the characters will plan which gaps in their knowledge they need to fill in order to further detail their plans. This may result in active observation (as in line 5, where Arthur checks whether the cave is empty) or in information-seeking subdialogues (as in lines 10-12).
Plan dynamics When Arthur arrives at the cave, he observes that the dragon, Smaug, is there. Arthur knows that he cannot take the treasure while the dragon is present. Thus, CMP detects, in its monitoring phase, that Arthur's plan has become invalid. Arthur generates a new plan, this time a multiagent plan in which Valiant is supposed to help him get the treasure. Switching to the new plan, Arthur leaves the cave and returns to the castle. We claim that it would be quite difficult to describe the plot so far with a single plan, let alone generate it with a single planner run. Continual planning, on the other hand, seems like the natural way to model how a character reacts to the obstacles she encounters.
A form of proactive continual planning is exemplified in lines 8-13. Prince Valiant initially does not know the location of the treasure. Thus he could normally not find a plan to get it and therefore would have to decline Arthur's request. However, the planning domain contains a counterfactual assertion stating, informally: "If I knew where the treasure was, I could make a plan to bring it somewhere else". Using this assertion, Valiant is able to deliberately postpone part of the planning process and first engage in the short subdialogue of lines 10-12 in order to gather the missing information (Brenner and Nebel 2009). The semantics of assertions is such that, when the missing information becomes available, a new planning phase is triggered. It provides Valiant with a more detailed plan which he executes until he also encounters Smaug and must again extend the plan to include fighting the dragon. In a different setting ("If I knew where the grail was...") satisfying the replanning condition of the assertion, i.e. closing the knowledge gap, may be the more complex part and constitute most of the resulting plot.
Goal dynamics The standard use of continual planning is to adapt plans to changing conditions in the outside world or an agent's beliefs about it. However, in real life (and thus in stories) motivations change, too. CMP agents can adopt temporary subgoals, e.g., when accepting a request by another agent, as in lines 9 and 12 of figure 1. Such changing goals usually lead to more substantial changes in the plot than mere plan adaptation for the same goal. Only after Arthur's request (line 8) does Valiant get involved in the story at all. In particular, this prompts a nice example of mixed-initiative behaviour (line 10), where Valiant immediately asks a question in return to get the information necessary to achieve his new goal.
Characterisation by affective goal activation As noted above, changes in an agent's goals may lead to substantial changes in her behaviour. Indeed, it can be argued that a character is better characterised by her motivations than by her (fairly volatile) current state of knowledge. However, a complex character has many motivations. Depending on her internal (or some external) context, motivations may be active (i.e. they drive her current behaviour) or inactive. Such context-dependent goal activation allows for a more fine-grained characterisation: e.g., in our story, the dragon will only want to kill humans when they have entered its lair.
Figure 2: Plot graph (excerpt) for the Quest story generated by MAPSIM. Legend: temporal links (bold black), causal links (black), threat prevention links (blue), false belief links (red), TSG achievement (olive box). For clarity, causal links with facts that are true at the beginning have been omitted.
Using MAPL's "currently" keyword, we can refer to the current state in a goal formula and describe the conditions for goal activation and deactivation. Several such conditional goals can be defined for a character. Together they can define a multi-faceted personality whose concrete intentions may change depending on the current situation, but who will show consistent behaviour in similar situations.
It is important for storytelling that the conditional goals characterising an agent can refer to emotions or mental attitudes directed towards other agents and objects, e.g., angry(a), loves(a,b), etc. For example, if the dragon only attacked intruders when angry, but was willing to share the treasure when happy, another story might tell how Prince Valiant charmed the dragon and thus could acquire the treasure without violence. This also opens CMP for use in affective storytelling (Pizzi and Cavazza 2007).
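A conditional, affect-dependent goal of this kind could be encoded as follows (illustrative Python, not MAPL's "currently" syntax; all predicate names are invented):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ConditionalGoal:
        condition: Callable[[dict], bool]  # evaluated on the current state
        goal: str

    dragon_goals = [
        # attack intruders, but only while angry and only inside the lair
        ConditionalGoal(lambda s: s.get("angry(dragon)")
                                  and s.get("in(intruder)") == "lair",
                        "dead(intruder)"),
        # share the treasure when happy
        ConditionalGoal(lambda s: s.get("happy(dragon)", False),
                        "has(intruder, treasure)"),
    ]

    def active_intentions(goals, state):
        # commitment turns desires into intentions: only goals whose
        # condition currently holds drive the character's behaviour
        return [g.goal for g in goals if g.condition(state)]

    print(active_intentions(dragon_goals,
                            {"angry(dragon)": True, "in(intruder)": "lair"}))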
Beliefs, desires, intentions Through MAPL's commitment preconditions, CMP requires characters to commit to a goal/desire before they can actively pursue it, i.e. make it an intention first (Bratman, Israel, and Pollack 1988; Cohen and Levesque 1990). In the multiagent case this means that if character A is not directly controllable by another agent B, B must first somehow persuade A to commit to a new goal before B can assume A is working towards it. In our story, Arthur knows that he cannot "use" Valiant in his plan to get the treasure unless Valiant commits to that goal himself, i.e. makes it an intention of his own. Here, CMP finds a plan for Arthur to achieve this using a simple request (lines 8-9), since in the MAPL description Valiant has been modelled as being cooperative towards Arthur. On the other hand, before CMP could include actions of the dragon in Arthur's plans, it would first have to indirectly activate one of the dragon's own desires.
False beliefs are important for plots, as they result in misunderstandings and in misguided or unsuccessful behaviour. Again, continual planning can be used to reason about the consequences of believing something wrongly. To show this, the example domain is set up such that the "stronger" character always wins a fight. Here, Smaug falsely believes himself to be stronger than Prince Valiant and attacks him (line 15), which eventually leads to his own death.
Chekhov's gun The plot graph describes which facts and objects are used in the plot. Those facts should be mentioned so that the reader/player can follow the reasoning of the characters. Crucially, the plot graph does not only point to preconditions of actions that characters actually perform, but also to those beliefs never used in an executed plan (because the plan or the belief became obsolete first) that are nevertheless necessary to explain the changing motivations and plans of the characters.
Related Work II
Having presented our approach to planning for storytelling, we can finally relate it to existing storytelling systems (again, only a few representative ones). It should be kept in mind, though, that we do not claim this to be a full storytelling framework, but only to provide planning representations and algorithms appropriate for being used inside such a system. MAPSIM is only a research prototype and a domain-independent testbed to evaluate variants of CMP.
CMP, when used by individual agents in a multiagent simulation like MAPSIM, works as a character-centric approach to storytelling (in contrast to author-centric and story-centric approaches (Mateas and Sengers 1999)) in the tradition of Meehan's Tale-Spin (Meehan 1977). However, since the intentions of several characters are reasoned about explicitly and individually in each plan, it also integrates aspects of author-centric approaches. In this respect, our approach seems closest to Fabulist (Riedl and Young 2004). When CMP is used in the context of a simulation, its capability to deliberately devise plans that may fail and to reason about dynamic goals of characters makes it quite suitable for dynamic, interactive storytelling like Emergent Narrative (Aylett 1999).
Thus, MAPL and CMP integrate features of both author-centric and character-centric approaches. It would be of great interest to evaluate their use in Interactive Storytelling frameworks that strive for a similar integration, e.g., (Si, Marsella, and Riedl 2008; Porteous and Cavazza 2009).
Discussion
The main contribution of this article is an integration of models and methods from a number of AI subfields into a representational (MAPL) and algorithmic (CMP) framework that is well suited for an "AI-complete" task like storytelling. Providing both a rich representation language for reasoning about multi-character environments and a distributed algorithm for planning by multiple agents, it combines aspects of both character-centric and author-centric approaches to storytelling. A specific emphasis has been put on enabling the generation of dynamic plots, in which beliefs, motivations and character traits may change. We believe that, due to a decade-long focus on planning as an isolated one-shot problem, these dynamics have been insufficiently studied in both planning and storytelling research, despite the fact that plot twists and character development are often said to be what makes a story interesting.
Interestingness or, more technically, plot quality is not an explicit concept anywhere in our approach. Thus it is not surprising that we cannot reliably claim that our approach will generate interesting stories. To this end, we will have to extend the approach with an author or, even better, a reader model. In future work, we will investigate how a CMP author agent can try to achieve plot goals by means of initial state revision (Riedl and Young 2006) and late commitment (Swartjes and Vromen 2007), concepts inspired by game-mastering in pen-and-paper roleplaying games. CMP supports these ideas almost directly, because it can reason about possible MVSV values that have not yet been sensed by any of the characters. In further work, we will consider reader agents who follow the unfolding of the plot and assess it using subjective, externally defined quality metrics. Given such a metric, we can iteratively use the author agent to generate a series of slightly modified stories, i.e. we can mimic the process of story revision in a form of local search in the space of plot graphs.
At first glance, creating believable interactions between characters in a story world may seem remote from more "serious" AI topics like robotics. However, this work was inspired by our research on human-robot collaboration and will, in turn, most certainly find its way back into robots again.
Acknowledgments
This research was supported by the EU as part of the Integrated Project CogX (FP7-ICT-215181-CogX).
References
Aylett, R. 1999. Narrative in virtual environments - towards emergent narrative. In Proc. AAAI Narrative Intelligence Symposium.
Bratman, M. E.; Israel, D. J.; and Pollack, M. E. 1988. Plans and resource-bounded practical reasoning. Computational Intelligence 4:349-355.
Brenner, M., and Nebel, B. 2009. Continual planning and acting in dynamic multiagent environments. Journal of Autonomous Agents and Multiagent Systems 19(3).
Cohen, P. R., and Levesque, H. J. 1990. Intention is choice with commitment. Artificial Intelligence 42:213-261.
DesJardins, M.; Durfee, E.; Ortiz Jr., C.; and Wolverton, M. 1999. A survey of research in distributed, continual planning. AI Magazine.
Fagin, R.; Halpern, J. Y.; Moses, Y.; and Vardi, M. Y. 1995. Reasoning About Knowledge. MIT Press.
Forster, E. M. 1927. Aspects of the Novel. San Diego: Harcourt.
Grosz, B. J., and Kraus, S. 1996. Collaborative plans for complex group action. Artificial Intelligence 86.
Lebowitz, M. 1985. Story-telling as planning and learning. Poetics 14:483-502.
Lochbaum, K. E. 1998. A collaborative planning model of intentional structure. Computational Linguistics.
Mateas, M., and Sengers, P. 1999. Narrative intelligence. In Proc. Narrative Intelligence Symposium.
Meehan, J. R. 1977. Tale-Spin, an interactive program that writes stories. In Proc. IJCAI.
Petrick, R., and Bacchus, F. 2002. A knowledge-based approach to planning with incomplete information and sensing. In Proc. AIPS-02.
Pizzi, D., and Cavazza, M. 2007. Affective storytelling based on characters' feelings. In Proc. AAAI Symposium on Intelligent Narrative Technologies.
Porteous, J., and Cavazza, M. 2009. Controlling narrative generation with planning trajectories: the role of constraints. In Proc. ICIDS.
Riedl, M. O., and Young, R. M. 2004. An intent-driven planner for multi-agent story generation. In Proc. AAMAS.
Riedl, M. O., and Young, R. M. 2006. Story planning as exploratory creativity: Techniques for expanding the narrative search space. New Generation Computing 24(3).
Si, M.; Marsella, S. C.; and Riedl, M. O. 2008. Integrating story-centric and character-centric processes for authoring interactive drama. In Proc. AIIDE.
Swartjes, I., and Vromen, J. 2007. Emergent story generation: Lessons from improvisational theater. In Proc. AAAI Fall Symposium on Intelligent Narrative Technologies.