Hey, this has been a success! I learned a great deal and finished off the class with an A! I hope some of you can find this interesting or useful. Feel free to shoot me a message on the site if you have any questions!
May you find your quest for knowledge to be a satisfying one!
Copycat is a program designed by Hofstadter that uses analogy to mimic actions made by the user. This can be a little hard to explain, so I will give an example: the exact same example Hofstadter used in his book to design his program.
Suppose the letter-string abc were changed to abd; it would then be reasonable to hand Copycat the letter-string ijk and ask it to change it in the same way. Copycat may produce the string ijl, seeing as the first string’s change was to replace the third element with its successor.
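To make that concrete, here is a toy sketch that hard-codes the one rule described above. This is nothing like Copycat's actual architecture, which discovers (and sometimes abandons) rules like this on its own:

```python
def successor(ch):
    """Return the next letter of the alphabet (wrapping 'z' around to 'a')."""
    return chr((ord(ch) - ord('a') + 1) % 26 + ord('a'))

def apply_analogy(target):
    """Change `target` "in the same way" abc was changed to abd:
    replace the last letter with its successor."""
    return target[:-1] + successor(target[-1])

print(apply_analogy("ijk"))  # -> ijl
```

The interesting part of Copycat, of course, is precisely the part this sketch skips: arriving at the rule by analogy rather than having it handed over.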
Hofstadter goes on, continuing his discussion of what Copycat is and does. However, what struck me as interesting was how he related Copycat to creative human thinkers. Creativity is an elusive concept, one that, for me anyway, seems to be a greater challenge to map to a computer system than most other everyday human thought processes.
Something about creativity speaks out to me as one of humankind’s greatest abilities. Elizabeth Gilbert does it great justice in the following video. She notes the close connection between creativity and genius, how creativity can easily escape us, and how we can easily get upset about how carelessly and capriciously our creativity can come and go. I find it is a struggle to be continuously creative; at times there is very little we can do to get the creativity flowing, yet sometimes, out of nowhere, amazing concepts have a way of popping into our heads. This is what Copycat does. It makes analogies about a given scenario (albeit within a small micro-domain), but it doesn’t have to worry about running out of creativity. Does this mean that Copycat is a genius?
In the English language, as in most languages, there exist words which have multiple meanings. However, when we think on certain words we feel as if they are “unified” to a single concept. Hofstadter makes this clear with the English word “hard.” When you think of this word, usually one singular meaning pops to the forefront, specifically “not soft.” However, there exist many different meanings for this word.
Synonyms of hard: difficult, callous, compact, dense, firm, hardened, impenetrable, inflexible, rigid, set, solid, tough, unyielding, adverse, industrious, inelastic, strict, severe, etc.
It is easy to see now how there exist many different interpretations, where hard can take on different meanings. This makes it incredibly difficult to program computers which can conceptualize and think in the way humans do. A machine can attack a problem from a given, pre-programmed perspective, but it becomes difficult to design systems which can switch perspectives on the fly, and know when to do so. Humans can do this with ease. As a human reads different sentences, the word “hard” can take on different meanings depending on the context. Machines would need true real-world knowledge to achieve this.
A salient and timely example comes to mind. A computer translating a sentence containing the word “hard” from English to German may give a direct translation of the word, but because it lacks any true real-world knowledge it may end up translating it incorrectly. Machine translation is a difficult area for computer scientists and computational linguists; yet it is one that researchers once believed would be solved by now.
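A tiny, hypothetical sketch of why this fails: German uses “hart” for the “not soft” sense but “schwer” for the “difficult” sense, so a word-for-word translator that always picks the first dictionary sense gets “a hard problem” wrong. (The lexicon below is made up for illustration; real MT systems are vastly more sophisticated.)

```python
# Hypothetical toy lexicon mapping one English word to its German senses.
SENSES = {
    "hard": ["hart",     # sense 1: not soft  ("a hard stone")
             "schwer"],  # sense 2: difficult ("a hard problem")
}

def naive_translate(word):
    """Blindly pick the first listed sense, with no context at all."""
    return SENSES[word][0]

print(naive_translate("hard"))  # always "hart", even in "a hard problem"
```

Choosing between the senses requires exactly the context-dependent perspective-switching that humans do effortlessly.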
Philosopher Immanuel Kant suggested that there exist two main structures which house human perception. The first of these is Sensibility. This is the low-level sensory input we take in constantly via our retinas, eardrums, etc. It is like the tape recorder or video camera which receives input but does nothing with it. Kant thought that sensory perception was uninteresting.
The second part of perception is that of deriving meaning from the sensory information. This is considered “high-level processing,” according to Kant and Hofstadter. In many ways this view of the human thought process makes sense; in others it does not. For one, it does not explain the fundamental, physical, internal structures necessary to actually store or process the information.
In reality, this is all just a philosophical exercise. We don’t know for certain what biological elements store memories, or even how we conjure up meaning when we view real-world objects. Additionally, since these are mostly subconscious processes, it is difficult to define what is actually taking place. At most we can do a little guesswork and/or relate these processes to those we are already familiar with. For example, where does object recognition fall? Is it sensory input alone, or is it mostly internal to our brain? Some would argue that we need to know what an object is before we can recognize it. But how did we come to recognize it in the first place, and where is the object stored in our brain?
AI researchers have many times attempted to create what can be described as “thinking machines.” However, a great many of these attempts have produced results which only appear to have worked. All too often people view or hear about the results of one of these machines and equate its ability to true human intelligence. This is known as the Eliza effect: an effect named after Weizenbaum’s therapist program, dubbed Eliza.
What the Eliza program does can easily be misconstrued as human, so it is not too hard to imagine how one might fall victim to the Eliza effect. The program emulates a Rogerian therapist; the user communicates with Eliza by typing text via a terminal. Eliza seems to respond, often taking the user’s own words and asking the user questions about them. Here is example text from the Eliza program:
- Human: Well, my boyfriend made me come here.
- ELIZA: Your boyfriend made you come here?
- Human: He says I’m depressed much of the time.
- ELIZA: I am sorry to hear you are depressed.
- Human: It’s true. I’m unhappy.
- ELIZA: Do you think coming here will help you not to be unhappy?
One can see how it is not too far a stretch, then, to let our imaginations get the better of us and believe that the computer empathizes with us and truly understands what we are relaying to it. Even today people can easily fall victim to this trap and feel an emotional tie to a machine that simply does not understand anything at all. A similar example is that of the uncanny valley. “The uncanny valley hypothesis holds that when robots and other facsimiles of humans look and act almost like actual humans, it causes a response of revulsion among human observers.” We may be approaching the uncanny valley (consider computer-generated movie and video game characters), but how long until we can interact with machines that truly understand what we are telling them?
Also, to try out an Eliza-like program yourself, visit Yui Chan, the Socratic Zen Conversationalist.
Numbo is a program designed to fluidly (as if a human were doing it) solve problems similar to those on the French television show Le Compte Est Bon. What strikes me as interesting is the apparent magic involved in its design. Here’s a short breakdown:
- The machine operates by housing a list of sub-programs that do all of the work.
- These sub-programs have an urgency associated with them. Apart from this, they are chosen at random from the list.
- These sub-programs do things like add numbers together, multiply them together, and break numbers apart once they have been combined, as well as manipulate the environment where all this takes place.
Since this is not happening completely procedurally, the different sub-routines (called codelets) can compete. It is fascinating that anything actually gets done, and that the system does not break down and get stuck in an infinite loop. It isn’t hard to imagine that since these procedures are happening in some sort of random environment in pseudo-parallel, they could counteract each other and loop forever. Hypothetically, for example, a loop could form where one codelet multiplies two numbers together, and another breaks them apart again, looping continuously.
This doesn’t happen because of the clever way the system was designed. The urgency associated with each codelet prevents this. As each operation is repeated its urgency (and probability) exponentially declines towards zero.
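A minimal sketch of that idea (the urgencies and decay factor below are made-up numbers for illustration, not Numbo's actual parameters): codelets are chosen at random, weighted by urgency, and each run decays the winner's urgency, so a multiply/break-apart loop quickly starves itself out.

```python
import random

# Toy urgencies; made-up values, not Numbo's actual parameters.
urgencies = {"combine": 10.0, "break_apart": 10.0}
DECAY = 0.5  # each run halves a codelet's urgency, driving it toward zero

def pick_codelet(rng):
    """Choose a codelet at random, weighted by current urgency, then
    decay the winner so repeating it becomes ever less likely."""
    names = list(urgencies)
    chosen = rng.choices(names, weights=[urgencies[n] for n in names])[0]
    urgencies[chosen] *= DECAY
    return chosen

rng = random.Random(0)
for _ in range(6):
    print(pick_codelet(rng))
print(urgencies)  # urgencies have decayed toward zero
```

The exponential decay is the key design choice: a codelet can still fire again, but each repetition makes the next one geometrically less probable.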
This sort of design is ingenious. Not only that, but the pseudo-parallel design is notable as well. It is only semi-recently that parallel computing, multiple processors, and threading have become such common system design elements. Today, most individual programs still execute on only one processor, so designing a program that runs within its own parallel confines decades ago definitely required a little extra thought.
Le compte est bon, literally meaning “the total is correct,” is a game show that exists (or at least existed) in France. The idea is that a contestant is given five small numbers and must combine them using simple mathematics (addition, subtraction, and multiplication) in order to reach a goal number, using as many of the numbers as they like (but each only once). To better understand, try your hand at this simple problem:
Numbers you can use: (8 3 9 10 7). Goal number: 87.
[British Television show “Countdown”, which shows a very similar problem].
Given this problem it is not very difficult to come to an answer: ((8*10)+7) = 87, or even ((9*10)-3) = 87. We work towards these answers with very little awareness of the deep cognitive processes that are taking place behind the scenes. This makes it difficult for psychologists and neuroscientists to say with absolute certainty how our mind works, and even more difficult for us to create true artificial intelligence (a dream some think will soon become a reality). The best we can do as computer scientists is to map out the inner processes that we believe take place within the human brain.
This is what we are attempting to map out in CSC366 very soon (the course this weblog is for). We’ve already completed the creation of an exhaustive solver for these problems. This wasn’t very difficult. What it does is solve these problems in a way that is very natural for computers: looking through every possible path until an answer is found. However, this bears no resemblance to human cognition; rather, humans will utilize well-trained instinct, take intelligent guesses, or follow many other forms of rational thinking. Very rarely (if ever) do we sequentially search through all possible scenarios until we find a suitable answer.
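Our class solver isn't reproduced here, but a simplified exhaustive solver might look like the sketch below. It only combines numbers strictly left to right, so it misses some bracketings, but it shows the decidedly un-human brute-force style: every subset size, every ordering, every operator assignment.

```python
from itertools import permutations, product
from operator import add, sub, mul

SYMBOL = {add: "+", sub: "-", mul: "*"}

def solve(numbers, target):
    """Brute force: try every subset size, ordering, and operator
    assignment, combining strictly left to right, until target is hit."""
    for size in range(2, len(numbers) + 1):
        for perm in permutations(numbers, size):
            for ops in product((add, sub, mul), repeat=size - 1):
                value, expr = perm[0], str(perm[0])
                for op, n in zip(ops, perm[1:]):
                    value = op(value, n)
                    expr = f"({expr}{SYMBOL[op]}{n})"
                if value == target:
                    return expr
    return None

print(solve((8, 3, 9, 10, 7), 87))  # -> ((8*10)+7)
```

The machine grinds through hundreds of candidate expressions without a hint of the instinctive pruning a human applies from the very first glance.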
Our goal, then, is to design a machine which performs, amongst other things, a terraced scan of all possible paths to take (much like the human brain [explained here]), weeding out paths that don’t seem to lead anywhere. For instance, you knew that multiplying all the numbers together in the above problem wouldn’t get you very far, but did you give the process any thought? It’s likely that your subconscious brushed up against this thought and then disregarded it without you being aware. A great deal happens in the mind that we aren’t aware of. Designing a machine like this can give us deep insight into the human mind: not only will it give us skills that can aid us in building machines which operate on the same level as humans, but it also provides us with a theoretical pathway for mapping out human cognition.
A spoonerism is the usually unintentional act of switching the first letters of two (or more) words. Reverend William Archibald Spooner, ex-warden of Oxford’s New College, was prone to this, hence his name was used as the etymological root of the word. Consider these spoonerisms which Reverend Spooner allegedly misspoke:
“The lord is a shoving leopard,”
and this one, which is a little more complicated, spoken when a student angered him:
“You have hissed all my mystery lectures, and were caught fighting a liar in the quad. Having tasted two worms, you will leave by the next town drain.”
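Mechanically, the simplest form of a spoonerism is easy to sketch. This naive version swaps only the single first letters of a two-word phrase; real spoonerisms (like Spooner's own) swap whole onset clusters such as “sh” or “qu”:

```python
def spoonerize(phrase):
    """Naively swap the first letters of a two-word phrase."""
    a, b = phrase.split()
    return f"{b[0]}{a[1:]} {a[0]}{b[1:]}"

print(spoonerize("jelly beans"))  # -> "belly jeans"
```

Handling multi-letter clusters and whole sentences takes more care, but the core operation really is this small.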
How, then, can a machine make use of a spoonerism? Consider JUMBO, a machine whose primary goal is to emulate human cognition for the purpose of constructing viable English words. What happens when this machine creates nonsensical (but close) word choices? Instead of wasting all the effort and time required to construct these word choices (in a human-like fashion, don’t forget), committing these words’ internal syllables to voluntary spoonerisms can lead to dramatic results, and often to viable English words.
While this isn’t the only mechanic built into JUMBO to “intelligently backtrack” and rework a word choice, it may be its most interesting. Spoonerisms can be powerful, and at times, embarrassing. In fact, some psychologists might argue that these seemingly unintentional “slips of the tongue” might be more than accidents after all (consider Freudian slips). In Reverend Spooner’s case, however, I am willing to argue they were nothing more than simple blunders.
Consider this post to be the spiritual successor of my previous post, “Probabilities Play an Important Role to Us.”
Probabilities and statistics are both increasingly prevalent and increasingly useful in computing as an undercarriage for artificial decision making (especially in the sub-field of robotics (see LAGR)). Like computers, the human mind makes decisions based on statistical information all the time. Hofstadter realized this a long time ago, so many of his studies are based around a statistical decision-making process. He designed a program called JUMBO which emulates the unconscious cognitive process of “deep perception,” which includes the unconscious act of simultaneously analyzing different outcomes of a scenario and weeding out irrelevant paths one might take. Rather than simply engineering programs which exhaustively, computationally search for possible solutions, Hofstadter preferred strong artificial intelligence: machines which solve problems in a human-like manner.
Probabilities are at the heart of his program JUMBO, which attempts to mimic cognition in order to solve seemingly trivial word puzzles. It “glues” together letters that make sense together in an attempt to make up possible English-like words. However, it has no prior knowledge of which words actually exist. Instead of a dictionary of all the possible English words, Hofstadter supplied JUMBO with a chart of how likely two (or three) letters are to be found next to each other in the English language, along with a strength value. So, the letter combination “sh” might have a strength of 8, while the combination “sq” might have a strength of 3.
Based on this stat chart, JUMBO glues together letters and forms new bundles of letters. These bundles also carry a probability of how likely they are to produce a viable word. Based on these probabilities it will continue to glue letters to other letters, or letters to bundles, until all the letters are used. Using these hypothetical words it proceeds to fill in the word puzzle (Jumble).
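As a rough sketch of the gluing step (the strength values below are made up for illustration; JUMBO's actual chart and bundle mechanics are far richer than this):

```python
import random

# Made-up strength values for illustration only.
STRENGTH = {("s", "h"): 8, ("s", "q"): 3}

def glue(first, candidates, rng):
    """Probabilistically pick a partner for `first`, biased by strength."""
    weights = [STRENGTH.get((first, c), 1) for c in candidates]
    return first + rng.choices(candidates, weights=weights)[0]

rng = random.Random(42)
samples = [glue("s", ["h", "q"], rng) for _ in range(1000)]
# "sh" (strength 8) should come up far more often than "sq" (strength 3):
print(samples.count("sh"), samples.count("sq"))
```

Notice that “sq” still appears sometimes: the chart biases the gluing, but never forbids a combination outright, which is what lets JUMBO stumble onto unusual-but-viable words.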
Hofstadter calls this process of weeding out unlikely candidates a terraced scan. It operates much like how we weed out all the books we aren’t interested in when we visit the bookstore: first we start with a quick surface scan, then take those which seem interesting and perform a deeper scan. Hofstadter intentionally built his system to solve the Jumble word problems in this manner as an attempt to emulate human cognition. Hofstadter urges uncovering our unconscious processing, which he calls “deep perception.” This processing happens so quickly we can’t notice it. He argues uncovering it could be a major breakthrough for artificial intelligence, despite claims from some in the scientific community that it is a waste of time to explore.
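The bookstore analogy translates into code quite naturally. Here is a toy terraced scan with hypothetical scoring functions (the book titles and scores are invented): a cheap pass prunes the field, and an expensive pass runs only on the survivors.

```python
# Hypothetical scoring functions stand in for "interest" judgments.
def surface_scan(candidates, cheap_score, keep=3):
    """Quick pass: rank everything cheaply and keep the promising few."""
    return sorted(candidates, key=cheap_score, reverse=True)[:keep]

def deep_scan(candidates, expensive_score):
    """Slow pass: spend real effort only on the survivors."""
    return max(candidates, key=expensive_score)

books = ["thriller", "cookbook", "ai textbook", "poetry", "ai memoir"]
shortlist = surface_scan(books, cheap_score=lambda b: b.count("a"))
best = deep_scan(shortlist, expensive_score=len)
print(shortlist)
print(best)
```

The point of the terraced structure is economy: the expensive judgment is never wasted on candidates the cheap pass has already ruled out.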
Hofstadter’s Seek Whence program extrapolates data from number sequences in an attempt to seek out rules for describing them. This is the essence of the program: to find patterns and other sequences from an initial given string of integers. However, he notes that there can be many different ways of looking at these patterns, and it is often the most unusual perspectives which are aesthetically pleasing to us. My first post covered an example of looking at the same sequence from two differing perspectives. Art, in a similar fashion, can often be derived the same way: by taking creative license, avoiding the obvious, and breaking the rules we were taught to follow.
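Seek Whence itself is far subtler, but the spirit of “finding a rule that describes a sequence” can be sketched by testing a few simple hypotheses in order (this is my own toy sketch, not Hofstadter's program):

```python
def describe(seq):
    """Try a few simple hypotheses and return the first that fits."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return f"arithmetic: each term is the previous plus {diffs[0]}"
    if all(a != 0 for a in seq[:-1]):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:
            return f"geometric: each term is the previous times {ratios[0]:g}"
    return "no simple rule found"

print(describe([1, 2, 3, 4, 5]))
print(describe([2, 4, 8, 16]))
```

A fixed menu of hypotheses is exactly what makes this a sketch and not Seek Whence: the interesting sequences are the ones where the “right” description requires inventing a new perspective, not checking a list.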
Hofstadter explains this specifically: “Often, what makes a piece of art appealing is precisely the fact that it violates some normal, easy way of doing things” (Hofstadter, 1995). For example, consider the work of Jackson Pollock. He is known for “action” or splatter painting. His art as an expression is meant to recreate the event that took place when a piece was painted, and was a major deviation from the norm. Instead of following the typical rules that art should portray real-life objects or mimic them in some way, he created his splatter paintings as a by-product of the actual event that took place to create them. While seemingly random, the pieces are meant to convey the moments during their conception. While machines may be able to replicate the apparent randomness of such artwork, they would find it difficult to judge it based on its aesthetics, and impossible to replicate the entirely human process under which it was created.
[Lavender Mist by Jackson Pollock]
As Hofstadter points out, one of the more difficult concepts for artificial intelligence to capture is a sense of aesthetics and beauty. Hofstadter tried hard to create a machine which could emulate this. A machine which could take a concept and toy with it, but also preserve a sense of its original being. A machine which could take care in noticing, as Hofstadter calls it, naturalness vs forcedness, or elegance vs clunkiness. These are all key elements in forming any kind of creative art, whether it be musical composition, product design, writing, or painting, and they also play a major role in emotional intelligence and general problem solving. Hofstadter, unfortunately, was mostly unsuccessful in his endeavor. The question, then, is: will it ever be possible for us to recreate this sense of intuition with the current form of computing we have today, or do we have to wait for some new technology before we can truly capture the intuitiveness of the human mind in a machine?