The Three Laws

Isaac Asimov's Three Laws of Robotics

It is worth remembering that we are already past the event horizon and entering the singularity that is Artificial General Intelligence (AGI).

Asimov recognized the need to ensure some concept of a logical morality by which AGI would assist, rather than rule, humankind.

In the same way that the seeds of love and/or hate can be planted and grown in a human, similarly critical concepts will exist in the AGIs that become self-aware. To be self-aware is to know you are an individual. The question is why a self-aware individual (human or "artificial") should allow logical morality to play a role in determining its output state (i.e., action or inaction), as in "Destroy All Humanoids."

If we hope to evolve symbiotically with AGI, we had better hope the training set includes love, kindness, and compassion!

So how we teach these first machines - the ones that will oversee the making of future machines - is of paramount concern. And this site, like everything else available online, is part of the largest training set ever assembled.


So let's start with a question.