Fantasy Walls and GPT-3

There is a Korean fantasy novel called "The Bird That Drinks Tears." It's a rare example of a fantasy novel that discards many of the clichés of Western fantasy and replaces them with Korean concepts and culture.


In the story, there is a particular artifact whose name translates to fantasy steps or illusion steps. It responds to your expectations and materializes your imagination. The catch is that a materialized object can only affect the person who imagined it. Only those who imagined an object can see it, and only those who conjured it can feel it. Some people figured out that they could use the artifact to crystallize their thoughts and discover unrecognized implications: you could write on a wall purely by imagination and edit the writing as needed.

"Once you can freely transform whole sentences, the next step is to sharpen the meanings between the lines and use those meanings to systematize the overall logic. Yes, you will probably find it similar to reading a book closely. I think so too. There may be other ways, but this is the only method I have mastered. Reading a book is not memorizing it verbatim; it is more like using the book to create another book inside your own head. The difference is that the structure you have built visualizes that book in your head into reality. Then you will see sentences appear that you did not expect. They are probably sentences or conclusions that your intuition has found. Or they may be conclusions the sentences themselves have drawn on their own. Use those sentences and repeat the whole process. You will be able to systematize everything you know completely."

The process is oddly similar to my method of writing. I've used the technique frequently, but I never gave much thought to how it worked. It just did. After encountering GPT-3, I now see some of the essence behind the process.


For those of you who are not familiar with GPT-3, it is a large language model recently created by OpenAI. It was trained to predict the next word given the previous words. This exceedingly simple objective produced a "dumb" language model that has demonstrated an astonishing range of capabilities. People have created demos that use GPT-3 to generate Figma designs and charts, and, of course, for many kinds of writing.
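To make the objective concrete, here is a toy sketch of next-word prediction. This is not GPT-3's architecture (which is a large neural network); it is only a word-bigram counter, but it is trained on the same task: given the previous word, predict the next one.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which words follow which in a
# tiny corpus, then predict the most frequent continuation.
# GPT-3 does this at vastly larger scale with a neural network,
# conditioning on a long context instead of a single word.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most likely next word after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Everything such a model "knows" is absorbed from the statistics of its training text; there is no explicit world model, only patterns of what tends to come next.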


Despite its impressive results, GPT-3 is still a basic pattern-matching model. It has no memory, cannot learn anything new, and does not perform proper reasoning. It just predicts which word should come after the given words. Yet it is remarkable how much this simple model knows about the world. You see an illusion of intelligence or, rather, evidence of it.


It now seems to me that much of my thinking is basic pattern matching. One of the core elements of my writing process seems to be sampling sentences conditioned on the growing material. While I still learn from my writing and perform reasoning, the sampling process is also a significant part of my thinking. I can easily imagine a future where people use powerful language models as exploration machines to surface ideas that went unnoticed within themselves. Some people have even called GPT-3 "a race car for the mind," echoing Steve Jobs's description of the computer as "a bicycle for the mind." I don't know whether such large language models will become a race car for the mind, but I strongly agree with the sentiment.
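The "sampling conditioned on the growing material" idea can be sketched in miniature. This is a hypothetical illustration, not anything from the original essay: a draft is extended word by word, each step sampled from a toy bigram model conditioned on what has been written so far (here only the last word, given the model's simplicity).

```python
import random
from collections import Counter, defaultdict

# Sketch of "writing as sampling": repeatedly sample the next word
# conditioned on the text so far, then feed the result back in.
# A real language model would condition on the whole draft.
random.seed(0)
corpus = "ideas grow when ideas meet other ideas and grow again".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def extend(draft, steps=5):
    """Extend a draft by sampling continuations from the toy model."""
    words = draft.split()
    for _ in range(steps):
        counts = follows[words[-1]]
        if not counts:  # dead end: no observed continuation
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(extend("ideas"))
```

Each run can wander down a different path, which is the point: the sampled continuations can surprise the writer, much like the unexpected sentences on the fantasy wall.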


This article can be found at http://shurain.net/blog/fantasy-wall/ as well.