In 2019, I attended a program at Singularity University, an organization founded by Peter Diamandis and Ray Kurzweil. We learned about exponential technologies and where the industry is headed. Kurzweil, a prominent prophet of the Singularity, describes a point where technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. This point is known as the technological singularity, or the Singularity for short.

There is no question that a superintelligence, born of artificial intelligence and cognitive technologies, will bring about a golden age of computing and help humans do more, do better, and at unprecedented speed and scale. However, this raises the question of whether technology can protect the interests of humanity, or even align with them. Or will we one day be run by supercomputers, instead of running them? Will this be the final achievement of the human race before it comes crashing to the ground?

Inevitability and Technological Singularity

The first real question is whether the technological singularity, if assumed to be inevitable, is actually a bad thing. Technology created by humans is inherently built to serve humanity – to serve the objectives and outcomes it is built for. Is technology evil? Technology is only as evil as the intentions of the humans who build it.


And so, just as technology can be built for evil, technology and supercomputing can be built for good – ushering in the age of ethical superintelligence, where technology is applied to the betterment of humankind. The applications of ethical artificial intelligence and superintelligence are plentiful, but the key to their becoming mainstream is that all technology be built with a strong ethical compass: a human compass that considers the future impact of technology's application.

Steps for a Non-Dystopian Tomorrow

What can we do? These are some of my thoughts on the steps we can take to create a non-dystopian tomorrow for future generations – an optimal future of moderation and balance.

  1. Begin with the end in mind 
    Before we dive into building technology for our future needs, we must start with why – the building of new technologies requires a strong sense of purpose: to do no harm. In the words of the founders of one of the Valley's greatest companies, we have to seek to do no evil. While "evil" is arguably subjective, across a variety of humanistic moral standards, universal human truths remain; basic human decency is to be kind to one another.

  2. Iterate the details with the big picture in focus 
    It is said that the road to hell is paved with good intentions. Oftentimes, while we begin with the best of intentions, corporate goals and objectives stymie the original good intent upon which technology is built. It is therefore important to inculcate a strong moral and ethical purpose within your organization's corporate culture, and to ensure that iterations and continual development of technology are always carried out without forgetting the overall objective to do no harm.
  3. Remember why you started 
    Technology often has its strongest impact when built for corporate purposes. Understanding that business exists to further human development and to create a better world will, in turn, produce technology that is purpose-built to help humans thrive and that aligns with the interests of humanity. If we constantly remember why we started in the first place, intelligent systems will build a better society for our children and future generations.

While these steps are not an in-depth disquisition on building ethical artificial intelligence, they are first steps on a long road that could mean paradise or disaster for future millennia.

What will you do to prepare for the inevitable future of technological singularity?