
19 Oct Think Before You Speak?

How closely is your brain connected to your mouth? Please don’t answer that. I want to blog about it for a while, so hold the thought. There is a great deal of electrical activity in the brain devoted to organizing concepts into context, and more still to putting your thoughts into words. Scientists often describe this organizing and putting as schemes or schemata. Roger Schank worked to tie concepts and language together with semantic networks, and John Sowa took the scheme a step further with conceptual structures.
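To make the idea of a semantic network concrete, here is a minimal sketch: concepts linked by typed relations, with properties inherited along “is-a” edges. The particular concepts and relation names (“canary”, “can”, “is-a”) are illustrative assumptions, not drawn from Schank’s or Sowa’s actual formalisms.

```python
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # edges[concept] is a list of (relation, target) pairs
        self.edges = defaultdict(list)

    def add(self, concept, relation, target):
        self.edges[concept].append((relation, target))

    def holds(self, concept, relation, target):
        """True if the relation holds directly or via is-a inheritance."""
        for rel, tgt in self.edges[concept]:
            if rel == relation and tgt == target:
                return True
        # climb the is-a hierarchy and inherit properties from parents
        for rel, parent in self.edges[concept]:
            if rel == "is-a" and self.holds(parent, relation, target):
                return True
        return False

net = SemanticNetwork()
net.add("canary", "is-a", "bird")
net.add("bird", "is-a", "animal")
net.add("bird", "can", "fly")

print(net.holds("canary", "can", "fly"))  # True, inherited from "bird"
```

The inheritance step is the interesting part: “canary can fly” is never stated directly, but follows from the concept’s place in the hierarchy, which is roughly what these networks were meant to capture.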

Correlations between perceptual and linguistic schemata are, in the words of Miller and Johnson-Laird, “mediated by an enormously complex conceptual structure.” As these two researchers point out, words and percepts are the avenues into and out of a conceptual structure built around space, time, and causation. “Any theory of the relation between perception and language must necessarily be a theory of conceptual thought” (Miller, 1976, p. vii).


The science of linguistics is the subject of posts in this section of the blog. It is close to the heart of the author, himself a linguist. One point of clarification is called for here: Natural Language refers to the communication systems used daily by all humans. These include such famous languages as English, Dansk, Swahili, American Sign Language, Bahasa Indonesia, Deutsch, 日本語, العربية, and many others that are less famous. Natural Language should never be confused with artificial and often counterintuitive synthetic languages such as Fortran and C++. There is also a link between artificial language and cognition, but that is a subject for another day.

Language is a critical subject in the context of AI because our human communicative skills are truly amazing. What if we could teach computers to interpret our communication as we speak or sign, naturally, without the interposition of a keyboard and a clumsy interface like DOS or UNIX? The ability to interpret would certainly be intelligent, making such a machine a major contender to pass the Turing test.

Remember the Turing test? The Turing test conceals the source of the responses from the tester and asks her to determine whether the answers to her questions come from a machine or a person. Because computers can store so much information, we may need a new test for computers, assuming we can ever teach them to speak and interpret real communications. In the new test, we will know that it is a computer instead of a person if it knows too much to possibly be human. This doesn’t necessarily imply intelligence, as eloquently shown by John Searle: “‘Could a machine think?’ On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking” (Searle 1980).

The machine (or program) that both knows that much and understands what you say, despite the imperfect and incomplete nature of human communication, will need to use fuzzy, brain-like techniques and conceptually structured knowledge: in our case, a language-centered ontology.
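One way to picture what “fuzzy” and “language-centered” might mean together: let surface words, not abstract symbols, index the ontology, and let lookup tolerate imperfect input. The tiny lexicon, concept names, and scoring threshold below are all invented for illustration; this is a sketch of the idea, not the author’s actual system.

```python
from difflib import SequenceMatcher

# Hypothetical word-to-concept lexicon; entries are assumptions for this sketch.
LEXICON = {
    "speak": "communication-act",
    "talk":  "communication-act",
    "think": "cognition-act",
    "brain": "cognitive-organ",
}

def fuzzy_concept(word, threshold=0.7):
    """Return the best-matching concept, tolerating misspelled or noisy input."""
    best, score = None, 0.0
    for entry, concept in LEXICON.items():
        s = SequenceMatcher(None, word.lower(), entry).ratio()
        if s > score:
            best, score = concept, s
    return best if score >= threshold else None

print(fuzzy_concept("speek"))  # misspelled input still resolves to a concept
```

The point of the fuzzy match is that real human communication arrives imperfect, so a rigid exact-match lexicon fails exactly where a language-understanding machine most needs to succeed.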

Will context and meta-knowledge make machines or programs truly intelligent? I can’t say for sure. In upcoming posts, I will describe how the complex, stratified problem of language understanding and the multi-faceted science of artificial intelligence converge to provide good opportunities for progress. And I think it is a good idea to think before you speak.

