
Experiment 1

Domain: robotics. Title: emergent activity. Goal: to have satisfaction.

Here we present a video showing an artificial mind map with an emergent activity.

Hypothesis: the robot is active in an unknown environment and executes a set of actions according to its goals.

We use Aibo recognition for objects, colors and persons. For the video, the robot recognizes these entities in the test environment: Alain, Mickael, Ball, Bone, Battery and Pink (two persons, three objects and one color, with a recognition rate of 90 percent), and follows four phases to interpret the data:

  1. Data transits through the multi-agent system. At this phase, neither the information nor the behavior of the system can be interpreted by a human.
  2. The system interprets the information in order to create an emerging form whose morphology follows the goals.
  3. A specific form, with its morphology, is chosen to direct the system.
  4. An action plan is created from the emerging form.
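The four phases above can be sketched as a small pipeline. This is only an illustration under our own assumptions: the function names, the data shapes, and the goal-based scoring are ours, not the system's actual code.

```python
# Hypothetical sketch of the four interpretation phases.
# All names and the scoring rule are illustrative assumptions.

def phase1_transit(percepts):
    # Phase 1: raw data transits through the multi-agent system;
    # at this stage it is not human-interpretable.
    return [{"label": p, "activation": 1.0} for p in percepts]

def phase2_emerge(messages, goals):
    # Phase 2: group the messages into candidate emergent forms,
    # weighted by how strongly each relates to the goals.
    forms = []
    for m in messages:
        score = 2.0 if m["label"] in goals else 1.0
        forms.append({"label": m["label"], "score": score * m["activation"]})
    return forms

def phase3_choose(forms):
    # Phase 3: choose one specific form (here, the strongest)
    # to direct the system.
    return max(forms, key=lambda f: f["score"])

def phase4_plan(form):
    # Phase 4: build an action plan from the chosen form.
    return ["approach " + form["label"], "interact with " + form["label"]]

percepts = ["Ball", "Bone", "Battery"]
goals = {"Ball", "Bone"}
plan = phase4_plan(phase3_choose(phase2_emerge(phase1_transit(percepts), goals)))
print(plan)  # → ['approach Ball', 'interact with Ball']
```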

During processing, we call a focal point a set of emergent knowledge that evolves in real time according to sensor values. It is possible to modify the robot's memory in real time.
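One way to picture such a focal point is as a set of knowledge whose activations are continuously reinforced by sensor values and otherwise decay. This is a minimal sketch under our own assumptions (the decay rate, the threshold, and the class itself are hypothetical):

```python
# Hypothetical sketch: a focal point as emergent knowledge that
# evolves in real time with sensor values (decay + reinforcement).

class FocalPoint:
    def __init__(self, decay=0.5):
        self.decay = decay
        self.activations = {}  # knowledge label -> activation level

    def update(self, sensor_hits):
        # Decay everything, drop knowledge that fades out,
        # then reinforce knowledge currently supported by the sensors.
        for k in list(self.activations):
            self.activations[k] *= self.decay
            if self.activations[k] < 0.1:
                del self.activations[k]  # leaves the focal point
        for k in sensor_hits:
            self.activations[k] = self.activations.get(k, 0.0) + 1.0

    def contents(self):
        # Current focal point, strongest knowledge first.
        return sorted(self.activations, key=self.activations.get, reverse=True)

fp = FocalPoint()
fp.update(["Ball", "Bone"])
fp.update(["Bone"])          # "Ball" fades while "Bone" is reinforced
print(fp.contents())         # → ['Bone', 'Ball']
```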

To record the video, we used a specific configuration:

  • An eleven-megabyte ontology (with verbs, persons, objects, simple emotions, mixed emotions, colors, physical capacities, etc.).
  • Acquaintances between knowledge, representing the robot's experience.
  • More than one thousand agents.
  • More than seven thousand threads.
  • All Aibo sensors and effectors (sensor values evolve in real time).
  • A simple goal: to have pleasure (linked with Ball, Play, Alain, Cushion, Pink, Cover and Bone) or to have dissatisfaction (linked with Work, Generator, Alain and Fatigue).
  • A PowerBook G4 at 1.33 GHz with 1.2 GB of RAM.
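The goal configuration in the last bullets can be pictured as a simple mapping from each goal to the knowledge linked with it. The mapping below uses the links stated above; the data structure and the helper function are our own hypothetical illustration:

```python
# Hypothetical sketch of the goal configuration: each goal is tied
# to the knowledge that can trigger it (links taken from the text).
goals = {
    "pleasure": {"Ball", "Play", "Alain", "Cushion", "Pink", "Cover", "Bone"},
    "dissatisfaction": {"Work", "Generator", "Alain", "Fatigue"},
}

def relevant_goals(percept):
    # Which goals does a perceived entity contribute to?
    # Note that "Alain" is linked with both goals.
    return sorted(g for g, linked in goals.items() if percept in linked)

print(relevant_goals("Alain"))  # → ['dissatisfaction', 'pleasure']
```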

In the video (.mov), there are five windows:

  1. A window about the agents (with seven threads per agent) that shows the complexity of the organization. At this point, neither the information nor the behavior of the system can be interpreted by a human.
  2. A window about the ontology and the metric relations between basic concept values, shown as matrices to explain the variations.
  3. A window showing the emergence of the focal point: an emergent group of agents, a "thought".
  4. A window exposing the constraints: the number of agents and the numbers of sensors and effectors.
  5. A terminal with logs of the orders sent to agents to direct the system towards its goals.

At the beginning of the video, there are two windows. The window on the left is a terminal showing messages about the control of the multi-agent system. The window on the right shows the focal point of the system, the "thought", which evolves over time according to inputs, knowledge, experience and goals. In this window, a yellow circle is an agent group, i.e. a set of agents that emerge at the same time. Each agent can have acquaintances with agents in its own group or with agents in other groups. These acquaintances are the robot's experience.
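The groups and acquaintances described above suggest a simple data structure: agents carry a set of links that can stay inside a group or cross group boundaries. A minimal sketch, with hypothetical names of our own choosing:

```python
# Hypothetical sketch: agents emerge in groups (the yellow circles),
# and acquaintances between agents encode the robot's experience.

class Agent:
    def __init__(self, label):
        self.label = label
        self.acquaintances = set()  # labels of linked agents

def link(a, b):
    # An acquaintance is bidirectional and may cross group boundaries.
    a.acquaintances.add(b.label)
    b.acquaintances.add(a.label)

# Two groups that emerged at the same time step.
group1 = [Agent("Ball"), Agent("Play")]
group2 = [Agent("Sympathy")]

link(group1[0], group1[1])   # acquaintance inside a group
link(group1[0], group2[0])   # acquaintance across groups: past experience
print(sorted(group1[0].acquaintances))  # → ['Play', 'Sympathy']
```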

Step at 17 s: we see the window with the system's mind map (named Paloma in the video, the project code name). Each point represents an agent. The metric used to place the points is an algorithm that builds groups of dots according to the agents' different roles. This is currently a static representation; there is no animation in this window in the video.

Step at 32 s: we see the window named "Entities Configuration", which exposes the system parameters, i.e. the number of agents, the number of roles, the goals, the numbers of sensors and effectors, and the network configuration for the robot.

Step at 40 s: we see the window named "IHM", which exposes the method used to modify in real time the acquaintances between knowledge, and therefore the robot's experience, via a stack of matrices.
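One way such a matrix stack could work is to treat each matrix as a weighted adjacency over the knowledge, with the effective acquaintance strength being the sum over the stack; pushing a new matrix then edits the experience without touching the base layer. This is purely our interpretation, with invented labels and weights:

```python
# Hypothetical sketch of editing experience via stacked matrices:
# each matrix is a weighted adjacency over knowledge, and the
# effective acquaintance strength is the sum of the stack.
labels = ["Ball", "Bone", "Sympathy"]

base = [[0.0, 0.2, 0.5],
        [0.2, 0.0, 0.4],
        [0.5, 0.4, 0.0]]

# A real-time edit pushed through the IHM window (hypothetical):
# reinforce the Ball-Sympathy acquaintance.
edit = [[0.0, 0.0, 0.3],
        [0.0, 0.0, 0.0],
        [0.3, 0.0, 0.0]]

stack = [base, edit]

def strength(a, b):
    # Effective acquaintance strength: sum across the stack.
    i, j = labels.index(a), labels.index(b)
    return sum(m[i][j] for m in stack)

print(strength("Ball", "Sympathy"))
```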

Let us now explain the set of figures presented in the focal point window during processing. There are three important sequences:

  1. Between seconds 1 and 14, three pieces of knowledge emerge in the focal point: "Ball", "Bone" and "Sympathy". The elements "Ball" and "Bone" are present in the system inputs, the scene, so this is a direct emergence. "Sympathy", however, comes from experience: in the past, the robot felt sympathy for a person associated with a ball or a bone. During this period, the robot exhibits a physical behavior because "Sympathy" is linked with physical capacities.
  2. Between seconds 15 and 18, the focal point is directed towards "Bone"; we note that this knowledge is important in the robot's experience.
  3. Between seconds 19 and 59, a set of figures shows the importance of the roles "Bone" and "Sympathy". This importance is complemented by a new role, "Front of", with another set of figures between seconds 50 and 112. As processing goes on, the robot's idea becomes more and more precise, and so does its physical behavior.
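The distinction in sequence 1, between direct emergence (the knowledge is in the scene) and emergence from experience (the knowledge is pulled in by acquaintances), can be sketched as follows. The experience table and the function are hypothetical illustrations of ours:

```python
# Hypothetical sketch of the two kinds of emergence seen in the
# focal point: "direct" (the knowledge is a system input) and
# "experience" (the knowledge is pulled in by past acquaintances).
experience = {
    "Ball": {"Sympathy"},   # past sympathy for a person linked with a ball
    "Bone": {"Sympathy"},   # ... or with a bone
}

def emerge(scene):
    # Scene inputs emerge directly; their acquaintances emerge
    # indirectly, through experience.
    focal = {k: "direct" for k in scene}
    for k in scene:
        for linked in experience.get(k, ()):
            if linked not in focal:
                focal[linked] = "experience"
    return focal

print(emerge({"Ball", "Bone"}))
```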

This is the last public presentation of the system's features. A set of patents covers several of the methods used to create the system. The site will be updated with new experiments related to the features presented on this page. This video was created with Snap Pro X in parallel with the artificial brain's processing.
