Artificial General Intelligence

Solving AGI problems using graphs

And the implications for human intelligence

At a minimum, a theory of Artificial General Intelligence (AGI) should explain how common human cognitive tasks are accomplished:

  • Planning and Acting
  • Recognizing Objects and Situations
  • Predicting the Future
  • Attributing Sources and Errors
  • Understanding Cause and Effect
  • Reasoning and Advocating
  • Understanding the Motives of Others

A graph is a handy way to enable these cognitive abilities and represent the state of the environment (observable world). For example, we might recognize a dog as a specific instance of a prototypical dog. This deceptively simple task involves understanding when and where we saw the dog, who told us about the dog, and which (real or imagined) properties we can attribute to the dog before we actually see it.

An actual dog has observable properties that override those of a prototypical dog, although we infer and reason with the prototypical dog's features until shown otherwise. (These properties are represented by recursive graph nodes, hundreds of nodes deep, at various levels of detail.) The act of recognizing an object like a dog involves associating inputs (from the senses) with the qualities of the prototype. In other words, it comes down to comparing the two graphs.
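
To make this concrete, here is a minimal sketch in Python. Prototypes are flat dicts of features, recognition scores the overlap between an observed graph and each stored prototype, and the recognized instance starts from prototypical defaults that observations then override. All names and properties are illustrative, not a real implementation.

    # Illustrative prototype graphs, flattened to one level for brevity.
    PROTOTYPES = {
        "dog": {"legs": 4, "tail": True, "barks": True, "fur": True},
        "cat": {"legs": 4, "tail": True, "barks": False, "fur": True},
    }

    def match_score(observed: dict, prototype: dict) -> float:
        """Fraction of prototype properties confirmed by the observation."""
        agree = sum(1 for k, v in prototype.items() if observed.get(k) == v)
        return agree / len(prototype)

    def recognize(observed: dict) -> str:
        """Pick the prototype whose graph best matches the observed graph."""
        return max(PROTOTYPES, key=lambda name: match_score(observed, PROTOTYPES[name]))

    def instantiate(name: str, observed: dict) -> dict:
        """Start from prototypical defaults; actual observations win."""
        instance = dict(PROTOTYPES[name])
        instance.update(observed)
        return instance

    seen = {"legs": 3, "tail": True, "barks": True}  # a three-legged dog
    label = recognize(seen)
    print(label, instantiate(label, seen))  # dog {'legs': 3, 'tail': True, ...}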

However, the graph model shown above is overly simplistic:

  • Obviously, node names (symbols) are not English words
  • Relationships (links, edges) between graph nodes are constantly changing

To address this, a node’s name or symbolic reference should be generated using a hash or digest that somehow encapsulates the content of the graph below it. For example, a “dog” could be internally named using something akin to a word vector (à la word2vec) that represents the (dimensionally reduced) essence of its entire underlying graph.
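
As a minimal sketch of content-derived names, a cryptographic digest can stand in for that word-vector-style embedding: serialize the subgraph canonically and hash it, so the symbol changes whenever the underlying graph does. The node structure below is invented for illustration.

    import hashlib
    import json

    def node_name(node: dict) -> str:
        """Derive a stable internal symbol from the graph below a node."""
        canonical = json.dumps(node, sort_keys=True)  # order-independent
        return hashlib.sha256(canonical.encode()).hexdigest()[:12]

    dog_prototype = {
        "properties": {"legs": 4, "barks": True},
        "children": {
            "tail": {"properties": {"wags": True}, "children": {}},
        },
    }
    print(node_name(dog_prototype))  # changes if anything in the subgraph changes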

Relationships (links, edges) between nodes must be constantly updated to remain synchronized with new information from our senses (observations, experiences), and augmented with new logical inferences from existing graph knowledge. Such inferences are made by thousands of algorithms, such as:

  • If A is-a B, and P is a prototype of B, then copy all properties of P to A as an initial state (see the sketch below)
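
Here is a minimal Python sketch of that rule, assuming a toy graph where is-a links and prototype links are plain dictionary entries. All node names are illustrative.

    # Toy graph: Rex is-a dog; the dog category has a prototype node.
    graph = {
        "dog": {"prototype": "dog_prototype"},
        "dog_prototype": {"properties": {"legs": 4, "barks": True}},
        "rex": {"is_a": "dog", "properties": {"name": "Rex"}},
    }

    def apply_prototype_rule(graph: dict) -> None:
        """If A is-a B and P is the prototype of B, copy P's properties to A."""
        for node in graph.values():
            category = graph.get(node.get("is_a", ""))
            if category is None:
                continue
            proto = graph.get(category.get("prototype", ""))
            if proto is None:
                continue
            for key, value in proto.get("properties", {}).items():
                # Initial state only: never override an observed property.
                node.setdefault("properties", {}).setdefault(key, value)

    apply_prototype_rule(graph)
    print(graph["rex"]["properties"])  # {'name': 'Rex', 'legs': 4, 'barks': True}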

The human body is full of algorithms. For example, the DNA error correction algorithms in each cell are executed thousands of times per second via a complex set of biochemical pathways (“if a DNA base doesn’t match the template then replace it”). No doubt, neurons have a similar algorithmic ability (yet to be discovered) to maintain the consistency of their knowledge graphs, given a set of rules and constraints.

In fact, maintaining state graphs in good order is 99% of what AGI should be about. Graphs are dynamic, ever-aligning, and ever-changing. Likely, each assertion (node or edge) carries an expiration date to ensure that no representation gets stale.
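
As a sketch of that idea, assume each assertion carries a timestamp plus a time-to-live, and a periodic grooming pass drops whatever has expired. The field names and the one-hour default below are made up for illustration.

    import time

    TTL_SECONDS = 3600.0  # hypothetical default lifetime for an assertion

    def assert_edge(edges: list, src: str, rel: str, dst: str,
                    ttl: float = TTL_SECONDS) -> None:
        """Record an assertion with an expiration date."""
        edges.append({"src": src, "rel": rel, "dst": dst,
                      "expires": time.time() + ttl})

    def groom(edges: list) -> list:
        """Keep only assertions that have not gone stale."""
        now = time.time()
        return [e for e in edges if e["expires"] > now]

    edges = []
    assert_edge(edges, "rex", "is_a", "dog")
    assert_edge(edges, "rex", "seen_at", "park", ttl=0.0)  # expires immediately
    print(len(groom(edges)))  # 1 -- the stale observation is dropped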

Consider another example: Planning and Acting. We humans set goals (short- and long-term) and we establish a set of intermediate objectives to reach those goals, each consisting of a recursive set of sub-goals.
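
A minimal sketch of such a recursive goal graph in Python, with a depth-first walk standing in for plan execution; the goals themselves are illustrative.

    # Each goal node recursively contains its sub-goals.
    goal = {
        "name": "get food",
        "subgoals": [
            {"name": "drive to store", "subgoals": [
                {"name": "start car", "subgoals": []},
                {"name": "navigate route", "subgoals": []},
            ]},
            {"name": "buy groceries", "subgoals": []},
            {"name": "drive home", "subgoals": []},
        ],
    }

    def execute(goal: dict, depth: int = 0) -> None:
        """Pursue all sub-goals before declaring the parent goal achieved."""
        print("  " * depth + "pursuing: " + goal["name"])
        for sub in goal["subgoals"]:
            execute(sub, depth + 1)
        print("  " * depth + "achieved: " + goal["name"])

    execute(goal)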

Once you begin to execute your plan, the reality of your trip to the store to acquire food is captured as specific memory nodes and exceptions to the plan (e.g., I ran into cousin Eddy, stopped to get gas, etc.).
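
A sketch of how that episodic record might accumulate, assuming each unplanned event is attached to the plan step where it occurred; the structure is invented for illustration.

    memory = []  # specific episode nodes created during execution

    def record(step: str, exceptions=()) -> None:
        """Create a memory node for a plan step, noting any deviations."""
        memory.append({"step": step, "status": "done",
                       "exceptions": list(exceptions)})

    record("drive to store", exceptions=["stopped to get gas"])
    record("buy groceries", exceptions=["ran into cousin Eddy"])
    record("drive home")

    for node in memory:
        print(node)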

A simple trip to the store can spawn thousands of nodes and edges in the graph. Some represent exceptions (“I wanted to go straight home but I ran into cousin Eddy”) or attributions (“Cousin Eddy said that I should add garlic to the recipe”) or mistakes (“Someone cut me off in traffic, but he was a beginner driver so it was OK”). With a good night’s sleep, the graph is rearranged and reconfigured, as events are rewritten more succinctly and prototypically. Or we simply forget them.

Humans love to communicate. At any point, we can explain our beliefs and actions to others. Language is simply a traversal of the graph, transmitted serially by generating new nodes that perform actions like speaking words (symbols) out loud.

The sentence “Eddy told me to add garlic to whatever I was cooking” might result from a graph traversal like the one sketched below (coupled with a language-syntax graph).
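
A minimal sketch, with a hard-coded template standing in for the language-syntax graph; the attribution node and its slots are invented for illustration.

    # An attribution node linking a speaker, a hearer, and the advice given.
    fact = {
        "type": "attribution",
        "speaker": "Eddy",
        "hearer": "me",
        "advice": {"action": "add", "object": "garlic",
                   "target": "whatever I was cooking"},
    }

    def verbalize(node: dict) -> str:
        """Traverse the node and emit words (symbols) in syntactic order."""
        a = node["advice"]
        return (f"{node['speaker']} told {node['hearer']} to "
                f"{a['action']} {a['object']} to {a['target']}")

    print(verbalize(fact))  # Eddy told me to add garlic to whatever I was cooking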

In summary, graphs can be used to represent complex and dynamic human perceptions, plans and interactions at an arbitrary level of detail. These graphs are constantly in flux, their nodes and edges groomed to maintain global consistency and alignment with a set of rules and constraints. From that constant interplay between graphs and algorithms emerges the mind.
