
Emergent Properties

14 comments, last by Shinkage 23 years, 10 months ago
This seems to me to be a vastly overlooked aspect of the game design process. Everybody wants to design to the last detail how every aspect of their game will work, but that seems to be totally ignoring the possibility of allowing for any kind of significant emergent properties. To be precise, I define emergent properties as unexpected results of the interaction of generally simple properties through a particular system. Properties that are, to coin a phrase, seemingly more than a sum of their parts. One of the most astounding and least understood emergent phenomena being that of human sentience. Although it seems like a powerful tool, I am at a loss for how to account for, or even allow for, it in the design process. Anybody have any ideas on how we might be able to take this peculiar phenomenon into account when designing games?
Cool topic! I'm working on this sort of thing right now (see the 'Random Races' and 'Dynamic empires' posts; also the 'Monster Generator' thread seems topical) :>

First off, note that one of the most obvious problems you get with emergent anything is control of the game environment. What's going to be there? How are you going to test for it? Is it going to be good enough? Also, how long does the emergence take before you get something useful?

These questions have show-stopping implications for any design, particularly if you've got a lot of cash riding on it. Normally it's better to add preset elements that can reliably be developed, debugged, and tested.

I think if you want to make this work, however, you've got to do something pretty radical: Since you can't guarantee the user's experience, you've got to give up on guaranteeing the user's experience!!! That is to say, if gameplay features, or enemies, or experiences, or even levels themselves are all emergent, you can't reliably predict what the user will find. If this is the case, then you may not be able to be sure that the user will not be frustrated, or annoyed, or even kept from winning.

The solution? For me, it's a mix of toy and game. Games have to let you win, but toys only have to entertain you. I have a lot of (plans for, anyway) random creatures, societies, and empires in my design. The empires, particularly, have profound implications for gameplay, as they change inventory, levels, characters, and the entire game environment. So in those areas where you're expected to win (combat's a good example), I'm building in predefined rules. But in areas where it doesn't matter (say, finding new societies) I'm working on a self-organizing, emergent system that's interesting to watch and interact with.

Funny enough, game features are somewhat emergent right now, and it's a QA problem. Take the "Rocket Jumping" feature found in some FPS games. Initially it was unexpected behavior (jump + the rocket's knockback effect), and you could get into areas you weren't supposed to; the level was then changed. But a more interesting approach would be self-reorganizing levels...
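
If it helps to see how little it takes for that kind of thing to emerge, here's a minimal sketch with made-up impulse numbers (not taken from any real engine) of two movement rules that were never designed to combine:

```cpp
// Hypothetical sketch: two independent movement rules (a jump impulse and an
// explosion knockback) that were never designed to stack. The numbers and
// names are invented for illustration, not taken from any real engine.
#include <cstdio>

struct Player {
    float vy = 0.0f;    // vertical velocity, units per second
};

void jump(Player& p)            { p.vy += 8.0f; }   // designed: normal jump
void rocketKnockback(Player& p) { p.vy += 6.0f; }   // designed: blast pushes the player

int main() {
    Player normal, rocketJumper;
    jump(normal);

    jump(rocketJumper);
    rocketKnockback(rocketJumper);   // emergent: firing at your feet mid-jump stacks the impulses

    std::printf("jump only: %.1f   jump + rocket: %.1f\n", normal.vy, rocketJumper.vy);
    // The combined velocity reaches places the level designer never intended.
    return 0;
}
```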

--------------------
Just waiting for the mothership...
In something (like computer programming) that lends itself so well to reduction, I fail to see how you can bring holistic ideals into it... but if anyone has any ideas, I'd be glad to see it. Here's my theory on the possibility (but I'm a little skeptical).

You see, in order to have something like you suggest, where two basic, defined things can combine to form a new, undefined thing... you must have some set of rules defining the paths that those basic things can take to form the new things. These sets of rules are defined from outside the system, and as a result, the new things that are formed are not really undefined... since you can look at it from outside the system and determine every possible thing by simply taking all the defined things (axioms) and running them through all the rules (functions). (There is only a finite number of possible axioms.)

Now, to combat this, you might want to allow the (in this case) computer to form its own functions. That way, there can be infinitely many new axioms, since new functions are being made to create new axioms. The only problem with this is, in order to put the computer in a meta-state (having the ability to alter its own rule set), you must give it rules to follow while in this meta-state. Thus, you will still not have an infinite number of axioms, since the function set is still finite.
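
To make the axiom/function picture concrete, here is a minimal sketch (the code is invented for illustration, with two rewrite rules borrowed from Hofstadter's MIU puzzle): a finite set of starting strings plus a finite set of rules, whose output an outside observer can enumerate completely.

```cpp
// A minimal sketch of the axiom/function picture: a finite set of starting
// strings (axioms) and a finite set of rewrite rules (functions). Nothing
// "undefined" can ever appear; the output is fully determined from outside.
#include <functional>
#include <iostream>
#include <set>
#include <string>
#include <vector>

int main() {
    std::set<std::string> theorems = { "MI" };   // the axioms
    std::vector<std::function<std::string(const std::string&)>> rules = {
        [](const std::string& s) { return s + s.substr(1); },               // Mx -> Mxx
        [](const std::string& s) { return s.back() == 'I' ? s + "U" : s; }  // xI -> xIU
    };

    // Apply every rule to every known theorem a few times. The closure is
    // completely determined by the finite axioms and the finite rule set.
    for (int pass = 0; pass < 3; ++pass) {
        std::set<std::string> next = theorems;
        for (const auto& t : theorems)
            for (const auto& r : rules)
                next.insert(r(t));
        theorems = next;
    }
    for (const auto& t : theorems)
        std::cout << t << '\n';
    return 0;
}
```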

To learn more about axioms and functions, read some of Principia Mathematica (don't read the whole thing... it's too boring). Principia Mathematica was an attempt by early 20th century mathematicians to create a formal system that could be used to describe anything (and formulate its truth-value) without using self-reference. Self-reference was realized early on to be the bane of any system (as far back as the Greeks; see the Epimenides paradox -- "This statement is false."). Principia Mathematica was an extremely powerful system; unfortunately, it was debunked by Gödel.

Gödel not only proved Principia Mathematica to be flawed, but he also showed that any formal system, assuming it is powerful enough, is either (1) incomplete, or (2) contradictory. He used some techniques from Quine, and some from Cantor's diagonal method, but his proof was mostly due to his method of "Gödel numbering"... the proof of how he debunked every system is far too lengthy to put here (it would take a couple of dozen pages, at least) but if you are interested, read Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter (an excellent book with LOADS of information on everything, even though it was written in 1979 -- it won a Pulitzer Prize).

Now, why am I going on and on about this? It has a point, trust me... Say we wanted to stop making meta-rules, and meta-meta-rules... and we just wanted to create a rule set that would allow for ALL things possible; that is, all things that are valid truths and non-truths... and sort them out after they are created. Well, due to Gödel's Theorem, any rule set we make will inherently be incomplete (so we'll have not only a finite number of axioms, but we'll also be missing some possibilities) or contradictory (so we'll have things that counteract the existence of each other, and our rule set will fail).

Okay, so maybe it doesn't matter that we don't have the possibility for EVERYTHING in our system... but unless we're going to have very complex interactions between the objects in our world, we don't really have the need for a rule set to define those interactions, and as such we might as well just clearly define those interactions beforehand.

Maybe I misunderstood your proposal, and you were merely asking for a way to define these interactions beforehand, and not worry about its expandability. In that case, just consider the past few paragraphs a view of how complex simple rule systems can be. Look into the axiom->function system, and some advanced logic... and it shouldn't be hard at all to set up a system that lays out possible interactions between not just two objects, but any number of simultaneous objects.
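
For whatever it's worth, one rough sketch of "laying out the interactions beforehand" could be a table keyed by pairs of object kinds, with a fallback for combinations nobody thought of. All the kinds and effects below are invented examples, not part of any particular engine:

```cpp
// A rough sketch of a predefined interaction table: every meaningful pairing
// of object kinds is listed up front, and anything not listed falls through
// to a default. All kinds and effects here are invented examples.
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <utility>

using Kind = std::string;

int main() {
    // Keys are stored with the two kinds in alphabetical order.
    std::map<std::pair<Kind, Kind>, std::function<void()>> rules = {
        {{"fire", "wood"},  [] { std::puts("the wood burns"); }},
        {{"fire", "water"}, [] { std::puts("the fire goes out"); }},
        {{"seed", "water"}, [] { std::puts("the seed sprouts"); }},
    };

    auto interact = [&](Kind a, Kind b) {
        if (a > b) std::swap(a, b);            // make the lookup order-independent
        auto it = rules.find({a, b});
        if (it != rules.end()) it->second();
        else std::puts("nothing happens");     // an interaction nobody defined
    };

    interact("wood", "fire");
    interact("water", "fire");
    interact("rock", "seed");
    return 0;
}
```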

"Properties that are, to coin a phrase, seemingly more than a sum of their parts"
--> Hate to bum you out, but the phrase has already been coined. In fact, holism is the belief that things are "more than a sum of their parts". That things like human consciousness cannot be defined as the chemical and electrostatic interactions in our grey matter. Reductionists take the opposing view, that all things are merely the sum of their parts... that the only reason we don't have a fully working vision of how our brain works on the lowest level is simply because we don't yet have enough information about that lowest level. I've come to accept that you can't have either. And you can't have both at the same time. And we'll never fully know.

"One of the most astounding and least understood emergent phenomena being that of human sentience."
--> Well, humans actually know a lot more than you think about human sentience, and more importantly, human consciousness. If you want to learn more, read The Conscious Mind by David Chalmers. Gödel, Escher, Bach: An Eternal Golden Braid also has a wealth of information about it, but The Conscious Mind is much more recent, and deals more with the metaphysical side of it.

As an answer to the question "how do you make a rule set that will successfully avoid being enclosed by finite bounds, while avoiding self-reference?" (Hofstadter had no views on this... nor did any other authors/scientists that I know of), here's my conclusion after finishing Gödel, Escher, Bach and doing a little more research... Configure three computers such that computer one alters the rules of computer two, which alters the rules of computer three, which alters the rules of computer one. These computers can be three separate machines, or just three processes/threads on the same machine... but I want to emphasize that I'm trying to avoid any ability to reference your own rule set (thereby eliminating the meta-rule set needed to do so). Each computer's own rule set consists of two parts: manipulating its own data (or axioms), and manipulating the rules of the computer immediately after it. Since none of these computers directly affects itself, or even directly affects a computer that affects itself, there is no meta-level to consider... there is only the level of the computer that comes before it in the chain.
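
A minimal sketch of that ring, with placeholder data and rules invented purely to make the wiring concrete, might look something like this:

```cpp
// A very rough sketch of the three-machine ring described above. Each machine
// holds some data and a rule for transforming that data; on every step it
// applies its rule to its own data and then rewrites the rule of the next
// machine in the ring. No machine ever touches its own rule set. The data
// and rules are trivial placeholders invented for illustration.
#include <array>
#include <cstdio>
#include <functional>

struct Machine {
    int data;                          // this machine's "axioms"
    std::function<int(int)> rule;      // how it transforms its own data
};

int main() {
    std::array<Machine, 3> ring = {{
        { 1, [](int x) { return x + 1; } },
        { 1, [](int x) { return x * 2; } },
        { 1, [](int x) { return x - 1; } },
    }};

    for (int step = 0; step < 5; ++step) {
        for (std::size_t i = 0; i < ring.size(); ++i) {
            Machine& self = ring[i];
            Machine& next = ring[(i + 1) % ring.size()];

            self.data = self.rule(self.data);               // manipulate own data...
            int bias = self.data;                           // ...then rewrite the rule of the
            next.rule = [bias](int x) { return x + bias; }; //    machine immediately after it
        }
        std::printf("step %d: %d %d %d\n", step, ring[0].data, ring[1].data, ring[2].data);
    }
    return 0;
}
```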

Of course, Gödel's Theorem would dictate that I'll still end up missing some truth-statements through this process... but I'm not too worried, since (1) it's a project that I will never have any time to finally finish, (2) it's not half as fun as creating a game (even if the theory did eventually lead to advancing someone else's work in AI -- real AI is something that I have very, very rigid theories about; feel free to ask me about them), and (3) I'm sure I've got some holes in the procedure that need to be closed up (but it's an idea).

Again, if anyone else has any ideas, I'd love to see them.

Sorry I danced all around the topic and didn't really stay on it much at all.
Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
After reading Wavinator's response, I feel like maybe I've stuck too much to your idea of the mind being an example of the kind of emergent phenomena that you wanted to imitate.

When it comes to the things that Wavinator discussed, I tend to think of them more as "dynamic features", not "emergent phenomena". Dynamic features can easily be implemented using pseudo-random values and loose rule sets.
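
As a throwaway illustration of that approach (everything in it is made up for the example): a creature generator that recombines a handful of hand-written fragments using a pseudo-random pick.

```cpp
// A toy illustration of "pseudo-random values plus a loose rule set": a
// creature generator that recombines hand-written fragments. Every fragment
// and number here is invented for illustration.
#include <cstdio>
#include <cstdlib>

int main() {
    const char* sizes[]  = { "tiny", "man-sized", "hulking" };
    const char* bodies[] = { "lizard", "insect", "fungus" };
    const char* habits[] = { "hoards shiny objects", "hunts in packs", "only moves at night" };

    std::srand(1234);   // any fixed seed makes the same "dynamic" content reproducible
    for (int i = 0; i < 3; ++i)
        std::printf("a %s %s that %s\n",
                    sizes[std::rand() % 3],
                    bodies[std::rand() % 3],
                    habits[std::rand() % 3]);
    return 0;
}
```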

The things I discussed are what you would use for truly convincing emergent phenomena, the kind that draw their source from inside an application and rely on the code written in that application.

Things such as rocket-jumping, though they can be described as emergent, draw their source from the humans that interact with the application, not the application itself (one more defense for holism in the human mind/consciousness).

I call these things simply unexpected behaviors. But you can call them whatever you want.
Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
Unexpected things arise just as easily from very detailed documents as from very vague ones. Errors in logic and incorrect details tend to give way to creative review, and any good designer is open to criticism. This makes for a better end result.

You can't expect the unexpected, because it might not show up. In that event, I'd rather have a detailed document to work with. Emergent properties, however, are priceless, and good designers need to know how to make room for that really good idea in a budget and a timeline. They also need to know when it WON'T work, and so file it away for later use...

======
"The unexamined life is not worth living."
-Socrates

"Question everything. Especially Landfish."
-Matt
======"The unexamined life is not worth living."-Socrates"Question everything. Especially Landfish."-Matt
Void, I think maybe that's a little too analytical an approach to the topic. Gödel's theorem states that a system can't be both complete and accurate, but generally that only applies to the real physical world. When creating computer games we define the rules by which our systems work, and thus can make them both complete and accurate in how they describe how the game works. Also, the point is a little tangential to emergent phenomena, which are covered more under the subject of chaos theory.

As for how this applies to game design, I was basically considering it on a couple of different levels. First off is the case of things similar to rocket jumping from Quake: a major aspect of gameplay that arose from an unexpected form of interaction between two game objects. These kinds of things generally turn out to be the most interesting aspects of a game. Not simply using the rules of the game as explicitly laid out, but discovering new and interesting ways of exploiting the interaction between those rules. My point is that game design seems to be more focused on just creating the rules, and not so much on the multitude of ways those rules can be made to interact with each other and with the environment.

Also, as Wavinator pointed out, the concept of emergence can also be used in the actual game environment. Populate a world with a bunch of virtual people and give them a limited set of resources. At this point, some will find enough food to live, and some will simply die. Now give each person the ability to either attack or ally with other people they meet in this virtual world. What happens now gets a little more interesting. Occasionally you'd see a battle between these little virtual people over the prized resources, but you'll also see them eventually build alliances and thus improve their ability to gather those resources. Those alliances that are best able to gather resources will become more and more powerful as they get larger and larger. Eventually you might end up with some interesting systems. Perhaps the alliances have formed into a bunch of medium-sized clans all vying for control of the land. Or perhaps one or two of those alliances were luckier than the rest and have formed into full-fledged kingdoms. At this point, social structures have emerged from the simple directives of gathering resources, killing, and allying. An oversimplified example, I'm sure, but I think it gets the idea across. Maybe this kind of phenomenon could be better exploited in games of the future.
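
A minimal sketch of that kind of setup could be as small as the following; every rule and number in it is invented purely to show the shape of the idea, not to model anything in particular.

```cpp
// A toy version of the scenario above: virtual people with a food supply, who
// on a chance meeting either ally (if both are weak) or fight over food.
// Every rule and number is invented purely to show how little machinery the
// gather / fight / ally directives need before group structure appears.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Person {
    int food = 2;
    int clan = -1;   // -1 means "not in any alliance"
};

int main() {
    std::srand(42);
    std::vector<Person> people(50);
    int nextClan = 0;

    for (int year = 0; year < 30; ++year) {
        // Scarce resources: only some people find food this year.
        for (auto& p : people)
            if (p.food > 0)                                   // the starved stay dead
                p.food += (std::rand() % 3 == 0) ? 2 : -1;

        // Random encounters.
        for (std::size_t i = 0; i < people.size(); ++i) {
            Person& a = people[i];
            Person& b = people[std::rand() % people.size()];
            if (&a == &b || a.food <= 0 || b.food <= 0) continue;   // the dead don't act

            if (a.food < 3 && b.food < 3) {                         // both weak: ally
                if (a.clan < 0 && b.clan < 0) a.clan = b.clan = nextClan++;
                else a.clan = b.clan = (a.clan > b.clan ? a.clan : b.clan);
            } else if (a.food > b.food) { a.food += b.food; b.food = 0; }   // fight over food
            else                        { b.food += a.food; a.food = 0; }
        }
    }

    // See what structure fell out of those three directives.
    std::vector<int> clanSize(nextClan, 0);
    int survivors = 0, loners = 0;
    for (const auto& p : people) {
        if (p.food <= 0) continue;
        ++survivors;
        if (p.clan >= 0) ++clanSize[p.clan]; else ++loners;
    }
    std::printf("survivors: %d (loners: %d)\n", survivors, loners);
    for (int c = 0; c < nextClan; ++c)
        if (clanSize[c] > 1) std::printf("alliance %d has %d members\n", c, clanSize[c]);
    return 0;
}
```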
How can you know until the game IS finished and all the final details are in place? One little detail can and will change the way the game is perceived by every individual that plays your game. You can plan to promote elements like greed, desire and hopefully addiction. I personally find it easy to bring feel into my designs because for me this is where I start with my game designing... with feel. It's also how I judge games (by their "feel") rather than by intellectual stimulation, excitement, etc.

I love Game Design and it loves me back.

Our Goal is "Fun"!
Um... I'm not talking about the feel of the game. I'm assuming by feel you mean the sort of ambiance?
Well, you mentioned human sentience, didn't you? That's what I was talking about. The impression that a game gives someone when they play it, yes?

I love Game Design and it loves me back.

Our Goal is "Fun"!
I only mentioned human sentience as an example of what could be considered an emergent phenomenon that everybody is familiar with.

