Iyad Rahwan: "Artificial intelligence does not ask questions"

Artificial intelligence and humans are often thought of as opposed to each other. Iyad Rahwan, who heads the Center for Humans & Machines at the Max Planck Institute for Human Development, takes a completely different view in an interview with the GDI: humans will remain responsible for the big questions.
30 March 2022, by GDI Gottlieb Duttweiler Institute

Artificial intelligence is on the rise. It is impossible to imagine our everyday life without it – algorithms suggest new music, films or products that we (might) like. So far, these are still baby steps, but how will AI accelerate human learning? Who will have the better ideas? Iyad Rahwan is Managing Director at the Max Planck Institute for Human Development in Berlin and a speaker at the GDI conference Discovery on Steroids: How AI Will Speed up Innovation on 5 July 2022. In advance of the event, Rahwan answered our questions:

GDI: AI promises to accelerate human learning. How?

Iyad Rahwan: In the short term, AI will help us answer some very difficult scientific questions. If you already know your question and have some data, AI can basically help you squeeze as much insight from the data as possible. One recent example is DeepMind's AlphaFold, which tackled the 50-year-old problem of predicting the three-dimensional structure of a protein from the string representing its amino acids. This feat was achieved by bypassing human ingenuity and unleashing AI to extract as much predictive power from the data as possible.

To quote Kevin Kelly: “With AI, Answers Are Cheap, But Questions Are The Future.” Can AI ask questions?

That's a good question, and one an AI would not have been able to ask! There are ways to build AIs that ask questions, but they are still rather limited. You would essentially have to give the AI a high-level goal, and it could then figure out which question to ask next in order to learn the most about that goal. But what AI cannot do is ask: which goal should we have? That is much harder.
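The kind of limited question-asking Rahwan describes is often implemented as active learning: given a fixed goal (say, an accurate classifier), the system picks the next example to ask a human to label. A minimal sketch of one common strategy, uncertainty sampling, is below; the toy model and function names are illustrative, not from the interview.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary prediction; peaks at p = 0.5."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_question(candidates, predict):
    """Pick the unlabeled example the model is most uncertain about.

    `predict` maps an example to the model's estimated probability of the
    positive class. Querying the most uncertain example tends to teach the
    model the most per label -- but the goal itself is fixed in advance.
    """
    return max(candidates, key=lambda x: entropy(predict(x)))

# Toy model: the prediction probability is the input itself, so examples
# near 0.5 are the most ambiguous and get queried first.
pool = [0.1, 0.48, 0.9, 0.7]
print(next_question(pool, lambda x: x))  # -> 0.48
```

Note that the human still supplies the goal (a good classifier) and the candidate pool; the machine only chooses which question, within that frame, is most informative.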

Who will have bigger ideas, humans or machines?

It depends what you mean by 'big'. AIs can now represent very high-dimensional correlations that are difficult for humans to describe in words. But revolutionary ideas, like major scientific advances or artistic innovations, will come from humans in the near and medium future. These ideas can be 'small', but they can have a massive impact on the world.

Where are the limits of machine learning?

The main limitation of today's machine learning is that it cannot form its own high-level goals. It needs to be given an objective, such as improving prediction accuracy or optimizing a game score. Humans are the ones capable of, and responsible for, setting high-level goals, such as which aspects of human well-being to prioritize, and which ethical principles to adhere to. These high-level goals then drive and constrain the behavior of AI systems.
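This division of labor can be made concrete with a tiny optimization loop: the machine improves whatever objective it is handed, but the objective itself comes from outside the loop. The sketch below uses random hill climbing; the specific goal function is an arbitrary illustration, not anything from the interview.

```python
import random

def optimize(objective, x0, steps=500, step_size=0.2, seed=0):
    """Random hill climbing: repeatedly propose a small change and keep it
    only if the human-supplied objective improves. The machine never asks
    whether the objective is the right one."""
    rng = random.Random(seed)
    x, best = x0, objective(x0)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        score = objective(candidate)
        if score > best:
            x, best = candidate, score
    return x

# A human decides the goal; here: maximize -(x - 3)^2, i.e. get close to 3.
goal = lambda x: -(x - 3.0) ** 2
result = optimize(goal, 0.0)
print(round(result, 1))
```

Swapping in a different `goal` changes what the loop converges to without touching the optimizer, which is the point: setting the goal is the human step, pursuing it is the mechanical one.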

What can companies do today to utilize AI for innovation?

For a long time, our education system aimed to make humans more like machines: making error-free calculations, performing repetitive tasks consistently, and improving detailed work processes. Now that machines are getting really good at these things, companies need to focus human energy on the creative, open-ended part of problem solving and innovation. They need to hire people who deeply understand what machines are good at, and who use those machines in creative ways.
