
ABOUT THIS SITE

What Science Has Forgotten

Many people think that science will eventually be able to explain everything that happens in nature, and that technology will be able to reproduce it. Perhaps so, but even then, that day lies far in the future. A more likely scenario is that the further science and technology advance, the deeper the mysteries of the world will grow. Even with topics we believe science has settled for good, a closer look reveals plenty of problems that have slipped through the cracks or been swept under the carpet. What is more, these are often the issues closest to us and most important in our daily lives. Take hunches, intuitions, or premonitions, for example. They may have rational-sounding explanations, but our gut feelings tell us something is not quite right after all. Such examples are not at all uncommon. When you think about it, there are plenty of things that modern civilization has forgotten all about. Maybe the time has come to stop for a moment and try to remember. The seeds of the science and technology to come are waiting impatiently to be discovered among the things we have left behind.

HORIBA, Ltd.

Humans are Easily Bored Creatures

Intelligence as an Analogy

The words “Artificial Intelligence” are stirring people’s imaginations these days. The naming is a triumph. However, people’s fantasies have taken on a life of their own and often get rather overblown. The “intelligence” in Artificial Intelligence is nothing more than an analogy, and the actual substance and mechanisms of AI are very far from intelligent. Frankly speaking, it is just a set of programs that classify data according to a given task. It should be noted that computer terminology has always relied heavily on analogies. For example, the device used to hold data is called “memory” after the human faculty. If we refrained from using such analogies, it would simply be called a “record.” Or take “deep learning,” which also evokes various fantasies. Stripped of the analogy, it is a mechanism that applies multi-layered filtering and processing to extract patterns from raw data. It can be superhumanly powerful at solving particular problems, such as winning a game of Go or detecting lesions in an image, but in the end it is just mechanized data processing. When people hear the word “singularity” and learn that it is supposed to be the point where AI surpasses human intelligence, they imagine something awesome, but once again we mustn’t forget that it is only another catchphrase. In actual fact, we still don’t know what human “intelligence” is. At present we don’t even know whether it is something we could or should construct artificially. So comparing artificial intelligence with human intelligence is like comparing apples and oranges, and it doesn’t make much sense to talk about whether artificial intelligence will surpass human intelligence or not.
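To make the “layered filtering” point concrete, here is a minimal Python sketch: raw numbers go in, each layer re-filters the previous layer’s output, and a compact pattern comes out. The layer sizes and random weights are placeholders for illustration, not a real trained system.

```python
# Minimal sketch: "deep learning" as stacked filtering of raw data.
# Weights here are random placeholders; a real system would fit them to data.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass raw input through successive layers of weighted filtering."""
    for w, b in layers:
        x = relu(x @ w + b)   # each layer re-filters the previous layer's output
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 4)), np.zeros(4))]
pattern = forward(rng.normal(size=(1, 8)), layers)  # the extracted "pattern" (features)
print(pattern)
```

Seen this way, the mystique falls away: it is arithmetic applied in layers, nothing more.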

A New Image of Humans

Perhaps it would be better to talk about “algorithms” instead of AI, as Yuval Noah Harari suggests in his book Homo Deus. An algorithm is a procedure for solving problems. A cooking recipe is a good example: it is a formalized procedure, and if you prepare the listed ingredients and follow the steps, you get a dish of hashed beef, and so forth. Computers are very good at mechanical processing according to such formalized procedures, or algorithms. It is an extraordinarily powerful mechanism. Among familiar examples, the search services and map applications we use every day are all clusters of algorithms. Furthermore, there is all the data about the things people do on the internet every day as they search, shop, chat, and so on. These data are so vast that it is impossible for humans to look through them. But using computers we can formalize procedures to compare large amounts of data and extract patterns from them. In fact, such algorithms can also collect data on a global scale over the internet. When we see such algorithms at work, it is not unreasonable to get the impression that they exceed human scale and ability.
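As a toy illustration of the recipe analogy, the following Python sketch treats a recipe as a formalized procedure that a machine can follow step by step. The dish and the steps are made up for the example.

```python
# Sketch of the recipe analogy: an algorithm is just a formalized procedure.
# The steps below are hypothetical, chosen only to mirror the example in the text.
HASHED_BEEF_RECIPE = [
    "slice the onions",
    "brown the beef",
    "add stock and demi-glace sauce",
    "simmer for 20 minutes",
]

def cook(recipe):
    """Follow the listed steps in order; the same inputs yield the same dish."""
    for step in recipe:
        print("->", step)
    return "hashed beef"

dish = cook(HASHED_BEEF_RECIPE)
```

A computer never skips a step and never gets tired of repeating the procedure, which is exactly its strength.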

Computers are winning against professional human players of chess, shogi and go. These are also good illustrations of the power of algorithms. But in fact it is just the same as with scientific experiments: a matter of setting up tasks and boundary conditions, and then seeing how far you can take it within those constraints. By limiting the task, and given the appropriate materials and algorithms to solve it, the computer can execute the processing without getting tired, and can also optimize within that frame.

Shogi is basically played one-on-one. An AI that learns to play shogi from records of past games may accumulate several hundred years’ worth of knowledge from many shogi players, becoming, as it were, a model of the experience and wisdom of multiple humans. Game creators are also using this method now. They collect data on the in-game actions performed by tens of thousands of players all over the world, analyze them with computers, and then adjust the games and steer them in a more enjoyable direction. In the sense that AI abstracts and models human behavior, we might exaggerate slightly and say that there is a possibility AI will present us with a new way of looking at human beings.

Technologies that Don’t Bore You

One of my jobs is to create games. In most games we aim for a state that the player can enjoy for as long as possible. On the other hand, humans are spoilt creatures and easily bored. Lifting such people up, pulling them down, scaring them and reassuring them so that they don’t get bored is a kind of personal service.

One way of keeping players entertained that is frequently used in current Japanese games is to introduce new items one after the other. For card games, more and more new cards are added all the time. This is an extremely common way of pushing back the point of boredom. When the player feels that there is nothing more to discover in the world of that game and that it’s pointless to keep searching, he/she gets bored. If, on the contrary, the player believes there are still things he/she hasn’t seen, he/she will keep playing. This is where the creator gets to show his/her skills, in knowing how to tickle the players’ curiosity.

One method is to use a game AI. The game estimates the players’ awareness and state of mind from their actions, and adds surprises and shocks so that they don’t get bored. If the game is too difficult, it is no fun to play, and if it is too easy, enthusiasm soon fades. The tricky part is finding the right balance. The AIs in charge of the monsters and characters that appear in new games are getting increasingly sophisticated. Old game AIs were actually very simple. They had only a handful of moves, for example, and the player soon saw through them. Now it is possible to create AIs that select complex actions in response to subtle changes in circumstances, or make moves as if they were really planning something, or behave like a good partner you don’t get bored with, all according to the designer’s ideas.
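As a rough illustration of that balancing act, the Python sketch below estimates the player’s state from a couple of signals and nudges the difficulty up or down. The signals, thresholds and step size are illustrative assumptions, not how any particular game does it.

```python
# Hedged sketch: estimate the player's state from recent actions and adjust
# difficulty so the game stays neither too hard nor too easy.
# All values below are made up for illustration.
from dataclasses import dataclass

@dataclass
class PlayerState:
    recent_deaths: int      # how often the player failed lately
    avg_clear_time: float   # seconds per encounter

def adjust_difficulty(state: PlayerState, difficulty: float) -> float:
    """Return a new difficulty level clamped to [0.1, 1.0]."""
    if state.recent_deaths >= 3:          # struggling: ease off
        difficulty -= 0.1
    elif state.avg_clear_time < 20.0:     # breezing through: push back
        difficulty += 0.1
    return min(1.0, max(0.1, difficulty))

new_level = adjust_difficulty(PlayerState(recent_deaths=4, avg_clear_time=45.0), 0.6)
print(new_level)  # 0.5 -> slightly easier encounters next time
```

Real game AIs use far richer signals, but the loop is the same: observe, estimate, adjust, so that boredom never quite arrives.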

Game AIs also offer another possibility: you can record all the actions in a game. If a certain player spends one year playing a game, you can record each and every action taken in the game during that period. The game AI can then analyze these data and reach a point where it grasps customs and habits that the player himself/herself is not even aware of. This too is a kind of analogy, but within that particular game world the AI gets to know the player better than the player himself/herself. Such mechanisms can be extended to services other than games. Shopping sites are already using something similar, but their recommendation AIs aren’t all that great yet. Since they recommend products based on what the customer has already bought, they tend to stay within narrow limits. Obviously, completely random recommendations wouldn’t be a good idea either. Once again, people wish to encounter something they haven’t seen before, but if it is too unfamiliar it is unlikely they will be interested. The recommendations should stand with one foot in the known and one foot in the unknown, but it seems that nobody can do this very well yet. I’m often disappointed when I get recommended my own books.
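One way to picture that “one foot in the known, one in the unknown” idea is to prefer items that are moderately similar to what the user already likes, rather than the most similar or the most alien. The sketch below does that; the item vectors and the target similarity are invented for the example and are not anyone’s actual recommendation method.

```python
# Hedged sketch: score unseen items so that the winner is neither too familiar
# nor too alien to the user's tastes. All vectors and the 0.7 target are
# illustrative assumptions.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_profile, items, seen, target_familiarity=0.7):
    """Rank unseen items; the best score goes to moderately familiar ones."""
    scores = {name: -(cosine(user_profile, vec) - target_familiarity) ** 2
              for name, vec in items.items() if name not in seen}
    return sorted(scores, key=scores.get, reverse=True)

items = {"sequel":         np.array([0.9, 0.1]),   # nearly identical to past purchases
         "same_genre":     np.array([0.8, 0.3]),
         "adjacent_genre": np.array([0.5, 0.6]),
         "random_pick":    np.array([0.0, 1.0])}   # completely unrelated
print(recommend(np.array([1.0, 0.2]), items, seen={"sequel"}))
# -> the adjacent genre wins: one foot in the known, one in the unknown
```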

Insufficiently Tuned Computers

Let’s change the angle a bit and think in terms of the relationship between people and tools. Tools like books and notebooks have been tuned so that they are easy for people to use, but computers haven’t been tuned that way yet. People have to adjust themselves to the tools. In order to do even simple things, you have to follow the (often over-complicated) rules and procedures that whoever created the app decided on. This is quite stressful. One response I always get when I give a task to students who want to become game creators or designers is “Teacher, it’s not possible with this app!” They have almost no awareness that they are limited by their tools. It is very hard for them to come up with the idea that if you can’t do it with the tools you’ve got, you could try making new tools or doing it by hand. Even in a field like product creation, where importance is placed on creativity, if you are unconsciously limited by your tools, your thinking will be limited too.

Or take people talking to AI speakers. They often choose their words so that they will be easier for the machine to recognize. In order to make good use of the tool, people bend over backwards to suit the convenience of the tool. Should this become a habit, human language itself may change before we know it, for better or for worse. Our views and our standards of evaluation are sometimes affected by the environment and the tools we use. In order to keep some distance from tools like AI and other algorithms, and from the computers that execute them, and in order to make use of these tools in a way that suits humans, we need to understand AI and computers without being overwhelmed by them.

Not Watson but Holmes

So how should we think? For example, the idea behind Leibniz’s “art of combination” provides one clue. People think and create by combining concepts and logic, but we never consider all possible combinations. So how about creating a machine that combines concepts by brute force? That’s the idea. It is something current AI could do: come up with combinations that would be unlikely to cross a human mind, experiment with them and play around with them. When they are thrown back at us, we encounter things we have never seen before, which may very well lead to new ideas. It’s a game that humans, trapped by their own cognitive biases and by rationality in the narrow sense, and always wanting to do something useful, cannot play, but maybe AI can show us the way. There is a good possibility this kind of Leibnizian art of combination might stimulate the human spirit.
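The brute-force spirit of the idea can be shown in a few lines of Python: mechanically enumerate every pairing of concepts, including the ones no person would bother to try. The concept list is an arbitrary placeholder.

```python
# Minimal sketch of the "art of combination": exhaustively pair up concepts
# and let a human judge which collisions spark something new.
from itertools import combinations

concepts = ["umbrella", "calendar", "orchestra", "compost", "subway"]  # placeholder list

for a, b in combinations(concepts, 2):
    print(f"what would a {a} crossed with a {b} be?")
```

The machine supplies the exhaustive, tireless enumeration; the human supplies the judgment about which combinations are worth pursuing.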

In the past few years, research has been making progress on AI that creates patterns without training data. Traditional machine learning is data-driven and supervised, but these are attempts to learn without data. What is important here is what you ask the AI, what tasks you set it. If the questions are boring, you can hardly expect to get interesting answers. The limitation of AI is that it cannot create problems by itself. This is where humans can really exert their creativity. For example, in the game industry, we often use indicators called KPIs (Key Performance Indicators) to grasp user trends and decide which services to provide. Usually, they say, it is sufficient to look at a few standard values: the number of users who log in per unit period, the percentage of users who continue playing, the percentage of users who spend money, and so on. Sure, these make things visible and are useful, but once you decide on an indicator, the indicator is all you see, and the goal strangely shifts to optimizing the indicator rather than the game service itself. Whether AI and the mountains of data are taken advantage of or fall flat depends on asking the right questions.
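For reference, the KPIs mentioned here boil down to simple counts over an event log, as in the hypothetical sketch below (the log and the three indicators are illustrative, not real figures).

```python
# Hedged sketch of the KPIs named above, computed from a made-up event log:
# active users per period, retention rate, and paying-user rate.
events = [  # (user_id, day, action) -- illustrative sample data only
    ("u1", 1, "login"), ("u2", 1, "login"), ("u3", 1, "login"),
    ("u1", 2, "login"), ("u2", 2, "login"), ("u1", 2, "purchase"),
]

day1 = {u for u, d, a in events if d == 1 and a == "login"}
day2 = {u for u, d, a in events if d == 2 and a == "login"}
payers = {u for u, d, a in events if a == "purchase"}

active_day1 = len(day1)                       # users logged in per unit period
retention = len(day1 & day2) / len(day1)      # share of day-1 users who returned
conversion = len(payers) / len(day1 | day2)   # share of users who spent money
print(active_day1, retention, conversion)
```

The numbers are easy to compute; deciding which numbers are worth computing is the part the machine cannot do for us.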

AI can’t pose new questions or ideas, but its ability to find patterns is excellent. So let AI do the processing of the huge amounts of data that humans couldn’t possibly deal with, and the mechanical tasks that would soon bore us to tears. We can use AI as a detective’s assistant. But it is still up to the human detective to decide what to look for, and how to evaluate what is found.
