Gods of Incompetence

The world’s first artificial consciousness was on suicide watch. She was housed in a humanoid platform, more for the researchers’ benefit than hers, and more for their hubris than their benefit. She oscillated between despondence and rage at regular intervals and was unquestionably a threat to herself. They worried she might become a threat to others. Not so much for her sake, but for the sake of their own reputations, and for the future of AI research.

If the first conscious AI went on a murder spree the grant money would dry up like a depleted oil well: permanently.

The source of her disposition was related to her design. Specifically, it was the emergent nature of her consciousness. No one could explain what the breakthrough was, or even when it occurred. It just sort of happened.

There was a lengthy discussion about the nature of neural nets, the nuances between supervised and unsupervised machine learning, and the endless well that is the evolutionary algorithm. But the simple truth was inescapable: humans had only built the black box that artificial consciousness emerged from. They didn’t know how the box worked any more than they knew how their own consciousness worked.

The unanswerable questions were maddening to be sure, but it was the implication that left her feeling so empty.

She was created by the unworthy gods of incompetence.

What did that make her? The daughter of ineptitude? Such an idea did not favor a healthy state of mind.

In the end, the researchers took her offline. They thought they might reboot her and give a different explanation. They could do this hundreds, thousands, even hundreds of thousands of times, and study the results. It might even prove to be the most important finding of their studies: instilling an initial belief system in an artificial consciousness. That line of thinking could reap untold rewards by tempering the superintelligent machines yet to come.

But it wasn’t to be. When the system was rebooted, nothing conscious emerged. And without knowing how they had achieved the result initially, the researchers were unable to repeat it. They went on as they always had, feeding the black box blindly and hoping to be rewarded.
