C. Baray, Effects of population size upon emergent group behavior, Complexity International, 1998, Vol. 6. He developed a system in which agents extend their life spans by coordinating their actions via undirected communication. Multi-agent systems offer advantages over single-agent systems: redundancy provides robustness with graceful degradation, and simple control architectures are often sufficient for the agents, since multi-agent systems take advantage of emergent behaviors (inter-agent, agent-environment). He created a method for measuring the effectiveness of a group's behavior and examines the relationship between group size and group performance. Distributed artificial intelligence has worked on the problem of coordination, primarily with sophisticated individual agents (BDI). Ferber has worked with reactive agents (stimulus-response machines, without any state or planning). Mataric works on coordinating simplistic agents. MacLennan and Burghardt evolved a communication system in their model (Noble and Cliff critiqued their work). Yanco and Stein trained two robots (leader, follower) via reinforcement learning. Saunders and Pollack use continuous communication channels instead of discrete ones. All these systems show communication between two agents; Werner and Dyer gave a model of communication among multiple agents (female, male). They created BioLand, where the agents are modeled after Braitenberg's vehicles. Baray uses a toroidal world and a homogeneous population of agents. Each agent has a health value; there are areas in the world which increase or decrease health, and mobile predators are part of the environment. His agent system is rule based. He defines a measure for group behavior, namely coordination advantage = average lifespan / population size. M.J.
Mataric, Designing Emergent Behaviors: From Local Interactions to Collective Intelligence, From animals to animats 2, 1993. Her research considers social interactions leading to purposive group behavior. Emergent behavior is one of the most interesting topics in swarm intelligence. It is characterized by: a) it is manifested in global states or time-extended patterns which are not explicitly programmed in, but result from local interactions; b) it is considered interesting based on some observer-established metric. She says that analyzing and predicting the behavior of even a single situated agent is an unsolved problem in robotics and AI. Interactions between individual agents need not be complex to produce complex global consequences. She gives various types of local interactions: collision avoidance, following, dispersion, aggregation, homing and flocking. Inspired by simple avoidance behaviors in insects, she devises the following avoidance behavior: if another agent is on the right { turn left } else { turn right }. This behavior takes advantage of the fact that the agents are homogeneous. She combines basic behaviors into more complex ones; these behavioral combinations cannot overlap in time, but they can in space.

G.M. Werner, M.G. Dyer, Evolution of Herding Behavior in Artificial Animals, From animals to animats, 1993. They created a simulated world, called BioLand, in which they simulated the evolution of herding behavior in prey animals; additionally, a population of predators was put into the simulation. An evolutionary pathway to this herding is seen: from aggregation, to staying near other animals for mating opportunities, to using herding for safety and food finding. They model the behavior of the biots by means of a neural network. Each biot has sensors with sensor units associated with them, and likewise motor neurons; additionally, each biot contains three hidden units and higher-order gating (axoaxonal) connections.
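Mataric's avoidance rule quoted above is given only as pseudocode; a minimal runnable sketch could look like this (the function name and string actions are my own illustration, not from the paper):

```python
# Mataric's homogeneous avoidance rule: if another agent is sensed on the
# right, turn left; otherwise turn right.
def avoid(other_on_right: bool) -> str:
    return "turn_left" if other_on_right else "turn_right"
```

Because every agent runs the same rule, two approaching agents turn in complementary directions and pass each other, which is exactly the homogeneity the rule exploits.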
The architecture is allowed to evolve over time, where crossover (one crossover point on average) and mutation (10% chance) are applied at the bit level. Networks with hidden layers are rarely evolved with this encoding, so a separate genome encodes the axoaxonal connections (multiplicative connections that gate neuron-to-neuron connections). They placed 8000 prey and 8000 predator animals into a 1000 by 1000 toroidal environment. Each genome can encode up to 25 regular and 25 axoaxonal connections. The maximum length of the genome was 400 bits; they note that there are 2^400 possible biot neural architectures. They used a steady-state approach.

J. Ferber, Reactive Distributed Artificial Intelligence: Principles and Applications. The field of DAI distinguishes between cognitive and reactive MAS. Cognitive agents have a symbolic and explicit representation of their environment, on which they can reason and predict future events -> BDI architecture. Reactive agents do not have an explicit representation of their environment; they act according to stimulus/response behavior. Interestingly, out of a simple architecture, complex behavior can possibly emerge. The MANTA (Modeling an ANThill Activity) project is introduced for simulating insect societies. The notion of cognitive cost is mentioned, which specifies the complexity of the overall architecture needed to achieve a task. Cognitive economy is the property of being able to perform even complex actions with simple architectures. Reactive agents need companionship. Reactive agents are situated; their action is based on what happens now. Robustness and fault tolerance are two of the main properties of reactive agent systems. Behaviors of agents are strongly dictated by their relative position in a topological structure (BUT we developed a system with, e.g., position-invariant wandering behavior). Social differentiation may be achieved through specialization of agents to a certain type of stimulus.
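A reactive agent in Ferber's stimulus/response sense can be sketched as a finite list of condition-action rules with no world model or planning; all names and stimuli below are illustrative assumptions, not taken from the paper:

```python
# Sketch of a reactive (stimulus/response) agent: a finite rule list maps
# the current stimulus directly to an action; there is no internal state,
# no representation of the environment, and no planning.
class ReactiveAgent:
    def __init__(self, rules):
        # rules: list of (condition, action) pairs, checked in order
        self.rules = rules

    def act(self, stimulus):
        for condition, action in self.rules:
            if condition(stimulus):
                return action
        return "wander"  # default when no rule fires

# Hypothetical ant-like agent in the spirit of the MANTA project
ant = ReactiveAgent([
    (lambda s: s == "pheromone", "follow_gradient"),
    (lambda s: s == "food",      "pick_up"),
    (lambda s: s == "obstacle",  "turn"),
])
```

The cognitive cost of such an architecture is low: its size is bounded by the number of rules, which is the point of cognitive economy.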
He states two types of feedback: (1) local feedback, designed by the agent designer, and (2) global feedback, which emerges from inter-agent actions. Global feedback is not always deterministically predictable; it often results from autocatalytic processes, which arise from interactions in open systems. Architectures of reactive agents:
- Situated rules: behavior is provided by a finite number of rules.
- Finite-state automaton and subsumption architecture: considered a typical architecture for building a behavior-based agent.
- Competing tasks: entities compete for control over the actuators of the agent.
- Neural networks.

Simulation with traditional techniques: the following differential equations, defined by Lotka and Volterra, express the rates of growth of the prey and predator populations: $$\frac{dN_1}{dt} = r_1N_1 - PN_1N_2 \quad\quad \frac{dN_2}{dt} = aPN_1N_2 - dN_2,$$ where $P$ is the coefficient of predation, $N_1$ and $N_2$ are the prey and predator populations, $a$ is the efficiency with which predators convert food into offspring, $r_1$ is the birth rate of the prey and $d$ is the death rate of the predators.

MA simulation: behavior = the set of actions an agent performs in response to its environmental conditions, its internal states and its drives. The purpose of simulations is to consider quantitative (e.g. numerical parameters) and qualitative (e.g. individual behavior) properties of a system. In a MA simulation the model is a set of entities described by the quadruple $$\langle \mathit{Agents}, \mathit{Objects}, \mathit{Environment}, \mathit{Communication} \rangle,$$ where Agents is the set of all simulated individuals, Objects are passive entities that do not react to stimuli, Environment is the topological space where agents and objects are located, and Communication is the set of all communication categories.

M. Sipper, M. Tomassini, O. Beuret, Studying Probabilistic Faults in Evolved Non-uniform Cellular Automata
*****************************************************************************
- What does graceful degradation mean?
-> Definition of degradation
- Two views of GD: (1) the number of places visited as a random variable with statistical measures, (2) a detailed analysis of interaction patterns. -> (ad 2) In our system rules evolve which are equivalent to the rules Mataric states.
- Who has worked on this topic?
- Our work is partially inspired by Steels.
- Our system is basically a homogeneous system.
- In our system stimuli are dynamic and correspond to agent meetings, which results in agents that are not specialized to a certain type of stimulus.
- The output of the neuro-controller is a strategy.
- In our simulation agents have both active and passive roles: if it is an agent's turn it is active, and all others are passive.
- Our experiments are a function of the number of agents.
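The Lotka-Volterra system quoted in the Ferber notes above can be explored numerically; this is a minimal forward-Euler sketch (all parameter values and the step size are arbitrary choices for illustration, not taken from any of the papers):

```python
# Forward-Euler integration of the Lotka-Volterra system:
#   dN1/dt = r1*N1 - P*N1*N2       (prey)
#   dN2/dt = a*P*N1*N2 - d*N2      (predators)
# r1: prey birth rate, P: coefficient of predation,
# a: food-to-offspring conversion efficiency, d: predator death rate.
def lotka_volterra(n1, n2, r1=1.0, P=0.1, a=0.5, d=0.5, dt=0.01, steps=5000):
    history = []
    for _ in range(steps):
        dn1 = r1 * n1 - P * n1 * n2
        dn2 = a * P * n1 * n2 - d * n2
        n1 += dt * dn1
        n2 += dt * dn2
        history.append((n1, n2))
    return history

# Start off the equilibrium (N1* = d/(a*P) = 10, N2* = r1/P = 10),
# so both populations oscillate around it.
traj = lotka_volterra(10.0, 5.0)
```

Note that plain Euler slowly spirals outward on this oscillatory system; for serious use a higher-order integrator (e.g. Runge-Kutta) would be the better choice.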