Dr. Ben Goertzel, Chief Scientist Behind the Sophia Robot, Hopes for a Benevolent AI Future
Dr. Ben Goertzel is the founder and CEO of SingularityNET, a blockchain-based AI marketplace. Goertzel is also chief scientist of the robotics firm Hanson Robotics, chairman of the Artificial General Intelligence Society, and vice chairman of the futurist nonprofit Humanity+. Hanson Robotics is the creator of Sophia, the social humanoid robot that has been covered by media across the globe. Goertzel was recently interviewed by AI Trends Editor John P. Desmond.
Q. How is Sophia doing?
A. Sophia is improving all the time. Sophia is a very cool research platform. We are using the robot to experiment with a variety of different AI tools and technologies. It has been quite interesting to see how people react when interacting with the robot. Many people personalize the robot emotionally more than I tend to, because when you know how all the motors and wires and the gears work and what all the software code is doing, you relate to it a bit differently.
Q. You have taken a unique approach to buying AI services on SingularityNET. Could you elaborate on your approach?
A. In December 2017, we created our own cryptocurrency called the AGI token, and we sold some of those tokens, which can be used to buy AI services on our network. Some of the tokens were bought with bitcoin, more were bought with Ethereum, and some were bought with U.S. dollars.
Q. Some have observed that companies heavily invested in Big Data are capitalizing on personal data unbeknownst to the individual. Can you describe what you see as the abuses of the Big Data monopolies?
A. I think there are outright abuses, and then there are just undesirable social trends and patterns. Also, even what starts out with benevolent motives can sometimes end up leading to bad things. Avoiding abuses of data and AI, and using data and AI for broad human benefit instead, is part of what motivated my colleagues and me to throw ourselves into this new and weird decentralized ecosystem and create SingularityNET as a blockchain-based project.
Germany’s Project Shivom, which is a partner of SingularityNET, is building a decentralized genomic platform where you upload your DNA sequence and then control its use, granting permission only to selected research projects. Homomorphic encryption can also be used to expose only certain aspects of one’s DNA data to certain projects. Control rests with the person whose data is at issue, rather than with the company that controls the big database. The decentralized way is better not only in terms of preventing outright abuse, but in terms of letting the utilization of the data be democratically directed.
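To make the homomorphic-encryption idea concrete, here is a minimal Python sketch of how a research project could compute an aggregate statistic over encrypted genomic values without ever decrypting any individual’s contribution. It uses a toy Paillier cryptosystem with small, hard-coded primes purely for illustration; it is not Project Shivom’s or SingularityNET’s actual code, and a real deployment would use a vetted library with full-size keys.

```python
# Toy Paillier cryptosystem (additively homomorphic), for illustration only.
# Requires Python 3.8+ for pow(x, -1, n).
from math import gcd
import random

# --- toy key generation: tiny, insecure primes chosen for the example ---
p, q = 2357, 2551
n = p * q
n_sq = n * n
g = n + 1                                        # standard simplification for Paillier
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)            # modular inverse used in decryption

def encrypt(m):
    """Encrypt an integer 0 <= m < n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Each data owner encrypts a value derived from their DNA (say, a risk-allele count).
owner_values = [2, 0, 1, 3, 1]
ciphertexts = [encrypt(v) for v in owner_values]

# The research project multiplies the ciphertexts, which adds the plaintexts,
# without being able to read any individual contribution.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n_sq

print("sum over encrypted contributions:", decrypt(aggregate))  # -> 7
```

The data owner keeps the private key, so only the aggregate result they choose to reveal ever leaves encrypted form, which is the kind of selective exposure described above.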
Q. Very compelling. Along the same lines, what does the future of democratized, open AI look like to you?
A. You can think about humanity being in the early stages of creating a global brain. We have created a network of different AIs: some of them are very narrow in focus and some of them more general in their ability to think. This network of AIs is circling the globe, accessing data from all sorts of sensors and databases, talking to people, taking video data from security cameras or people’s phones. One may ask, “What is this emerging global brain thinking about?” Is it thinking about how to kill people of the wrong race, religion, and nationality? Is it thinking about how to sell people the chocolatiest chocolates or the latest luxury brand’s purse? Is it thinking about advanced scientific research and world peace? How to cure disease and prolong life?
I would like what the global brain thinks about to be determined in a richly participatory way. Perhaps we should make a web app where people vote on what the global brain should think about. Certainly, when you’ve uploaded your web search history or DNA data, the software processes constituting the global brain should ask for your permission for how it’s used: “Can I use this web search data for racial profiling? Can I use this web search data to decide what ads or news articles to show you?” If decisions about how data and resources are used are up to the individual, then what the global brain thinks about is defined in a more democratic way. That will lead to a more intelligent global brain than having the thought process be dictated by the desire of one country to dominate others, or the desire of one company to maximize the value accruing to its shareholders.
Q. Would this be like benevolent artificial general intelligence (AGI)?
A. None of us knows what a benevolent AGI is going to look like. But I think making AI systems understand and absorb human values and culture is just as important as making them more generally intelligent. However, you can’t write down the 10 or 100 human values in an orderly and useful way. Asimov’s Three Laws of Robotics were basically designed as a demonstration of why that doesn’t work. Human ethics, morality, and values are more complex, self-contradictory, and subtle than that. They are far more than any list of explicitly formulated rules.
You get human values and culture into your AI by interacting with the AI emotionally, socially, and informally in shared perceptual, social, and cultural contexts. And even if the smartest AIs don’t have a human body or face, they can be connected on the back end with an embodied AI like Sophia, which allows the overall AI network to interact with people in a richer way.
Suppose we get an AI that’s not only smart, but has absorbed the better aspects of diverse human values and culture. My hope is that we will create an AI that’s much smarter than people, and also beneficially inclined toward people. We don’t want a robot dictator telling us to obey because it’s smarter and knows what’s best. Instead, you would hope for a sort of AI nanny that lives in the background while human society goes on in a beneficial way, subtly guided and regulated by advanced AI.
Once AI has advanced sufficiently, people won’t have to work for a living, because robots and other forms of automation will be doing all the work. Your voice-controlled molecular nano-assembler will print whatever materials you want and human life will proceed on its own. But the benevolent AI is there in the background just in case something crazy happens — to stop some lunatic from creating a briefcase nuke or something.
Q. Shifting gears a little bit. What’s the business or revenue model for SingularityNET?
A. Our business model is twofold. One aspect of the business model is that it’s a platform. The economic logic is set up so that when entrepreneurs put their AI on our platform and sell their AI services to customers via the platform, that increases the value of the network. The economics of it are a little subtle, because we don’t charge a fee, per se, but we have a custom cryptocurrency, the AGI token, and there’s a fixed number of tokens available. Some of the tokens are held by the SingularityNET Foundation, and the value of this store of tokens can take the place of revenue that would be obtained by charging a fee.
The second piece of the business model is that we’ve released some of our own AI services to sell on that platform, for things like genomic and medical data analysis, controlling social robots, and social network analysis, along with basic AI services like data analysis using neural networks or evolutionary learning, or machine reasoning or natural language dialogue using OpenCog. So, there’s the platform, and there are some things we’re selling on the platform.
Q. What AI technology platforms are you currently using? And how is AI code activated on SingularityNET?
A. We’re using OpenCog, which is our own AI platform working toward general intelligence. For neural networks, we’re using TensorFlow because it’s popular and simple to use. We’re also working with something called BriCA, which is made by Japan’s nonprofit Whole Brain Architecture Initiative. One goal is to get the 10,000 small, open-source AI projects that are already on GitHub into our SingularityNET platform. I am ultimately interested in making a decentralized society and economy of AI minds where the AI agents populating that society are drawn from an abundance of different AI tools that Ph.D. students and entrepreneurs have created and largely already placed into GitHub and other online code repositories. [Editor’s Note: GitHub was acquired by Microsoft in June for a reported $7.5 billion in Microsoft stock.]
AI code put into a GitHub repository just sits there until someone with suitable technical ability downloads it. By contrast, if people put their AI code into a Docker container online and have that container activated on SingularityNET, their code is live and active. Anyone can use it, and any other AI agent can interface with it.
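The contrast Goertzel draws is between inert code and code exposed as a callable service. The following is a minimal sketch, using only the Python standard library, of wrapping some AI routine as a network endpoint that other programs or agents can call; it is a generic illustration, not the actual SingularityNET integration, and the analyze function is a hypothetical stand-in for a developer’s real model. On the platform described above, a wrapper like this would live inside a Docker container registered with the network.

```python
# Minimal "code as a live service" sketch using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyze(text: str) -> dict:
    """Hypothetical stand-in for whatever AI model the developer has published."""
    return {"tokens": len(text.split()), "chars": len(text)}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body sent by another service or AI agent.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = analyze(payload.get("text", ""))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any person or agent can now POST JSON to http://localhost:8080/
    HTTPServer(("0.0.0.0", 8080), AgentHandler).serve_forever()
```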
Q. Can you talk about where you are in the development and adoption of SingularityNET?
A. There’s a lot of enthusiasm. We have an alpha online – suitable for exploring integration with AI tools and for research – and we’re working toward having a scalable beta. We’re talking to several companies about how to use this platform to solve various problems they face within their own internal IT systems.
Our plan is to push from SingularityNET alpha to beta this year, along with putting more of our OpenCog, neural net and evolutionary learning-based AI into this network, and creating services on top of this. For our services, we’re looking at verticals like social robotics, biomedical analytics, and social media analytics. We want other AI developers all over the world to put their own AI algorithms and services in the network. The real fun comes when different AIs start outsourcing substantial amounts of work to each other, sending data back and forth, rating each other’s reputations in different aspects. Then the different AI agents of the network are no longer acting in an isolated way, but you have a society and economy of minds, which has its own emergent intelligence beyond the sum of the intelligences of the parts. That’s our plan for the next few years and how we intend to create a democratic global brain, spawning a benevolent technological singularity, much as my friend Ray Kurzweil has prognosticated.
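As a rough picture of the “economy of minds” Goertzel describes, here is a toy Python sketch in which agents outsource subtasks to one another and maintain simple reputation scores based on how well the work was done. The agent names, skill values, and moving-average reputation rule are all hypothetical illustrations, not SingularityNET’s actual protocol.

```python
# Toy "society and economy of minds": agents delegate work and rate each other.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skill: float              # probability of completing a subtask well (toy model)
    reputation: float = 1.0   # running score updated from peer ratings

    def perform(self, subtask: str) -> bool:
        # Stand-in for actually running the agent's AI on the subtask.
        return random.random() < self.skill

def outsource(requester: Agent, workers: list, subtasks: list) -> None:
    for subtask in subtasks:
        # Pick the worker with the best reputation, a crude stand-in for market choice.
        worker = max(workers, key=lambda w: w.reputation)
        success = worker.perform(subtask)
        rating = 1.0 if success else 0.0
        # Exponential moving average keeps reputation responsive but stable.
        worker.reputation = 0.9 * worker.reputation + 0.1 * rating
        print(f"{requester.name} -> {worker.name}: {subtask} "
              f"({'ok' if success else 'failed'}, rep={worker.reputation:.2f})")

if __name__ == "__main__":
    random.seed(0)
    vision = Agent("vision-net", skill=0.9)
    parser = Agent("language-parser", skill=0.6)
    planner = Agent("planner", skill=0.8)
    outsource(planner, [vision, parser],
              ["segment image", "extract entities", "rank results"])
```

Even in this toy form, the point of the answer comes through: once agents can call one another and track who does good work, useful behavior emerges from the network that no single agent exhibits on its own.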
Q. Good luck with the project. It’s good to know someone is working on it. Any final thoughts that you’d like to share with our readers?
A. Times are getting more interesting. I’ve been working on AI for more than 30 years, and now we are finally in an era where more of the world can appreciate some of the things that many of us have been talking about and working on for decades. We are in a position to bring so many of our long-nurtured science fictional dreams into reality — with the help and participation of people from around the world.
Learn more at SingularityNET.