Anyone working on artificial intelligence (AI) and natural language processing (NLP) is already acquainted with the Chinese Room thought experiment. Imagine a person who speaks no Chinese inside a room, equipped with English rules and instructions for manipulating Chinese symbols. A native speaker slides a letter written in Chinese under the door; the person inside, following the rulebook, composes a response and passes it back. Critics of the claim that machines can truly understand argue that the person in the room may appear to be a native speaker, having produced a proper response, even though in actuality the person does not understand Chinese. Their main concern is that there is no conscious experience, and therefore no grasp of the meaning and intention behind the interaction. John Searle, the philosopher behind the thought experiment, wanted to show that a machine does not comprehend anything passed to it; it only returns a programmed response that is indistinguishable from one a human would give. But isn’t that the goal?
What essentially constitutes understanding in human-machine communication? That is the question we should raise first. How do we communicate with each other, and how could AI improve that communication further? Human-to-human communication happens through speech and non-verbal cues. We speak to each other to express our thoughts, and we are not limited to speaking: information is also passed on in written form, through words and sentences. We can communicate visually as well, conveying ideas with nothing more than pictures. Though these categories simplify it, communication is dynamic and complex, full of subtle nuances. Verbally, we need to be mindful of tone and intonation, and we have to consider whether an utterance is literal or figurative and in what context it was spoken. Conversing with a fellow human is simple and complicated at the same time. Keeping those complexities in mind, is there a better way for us to communicate with a machine?
Various levels of ambiguity to consider:
– Word-level ambiguity (contracted forms)
– Sentence-level ambiguity (figures of speech or idioms)
– Meaning-level ambiguity (words with several different meanings)
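To make these levels concrete, here is a toy sketch in Python. The word lists and readings in it are purely illustrative assumptions, not a real NLP pipeline; it only shows how a single everyday English sentence fans out into several candidate readings that a machine must choose between.

```python
# Toy sketch only: the word lists and readings below are illustrative assumptions,
# not a real NLP pipeline.

CONTRACTIONS = {"he's": ["he is", "he has"]}                     # word-level ambiguity
IDIOMS = {"kicked the bucket": ["died", "struck the bucket"]}    # sentence-level ambiguity
WORD_SENSES = {"bank": ["financial institution", "river edge"]}  # meaning-level ambiguity


def candidate_readings(sentence: str) -> list[str]:
    """Enumerate every combination of expansions, idiom readings, and word senses."""
    readings = [sentence]
    for table in (CONTRACTIONS, IDIOMS, WORD_SENSES):
        expanded = []
        for reading in readings:
            hits = [key for key in table if key in reading]
            if not hits:
                expanded.append(reading)
                continue
            for alternative in table[hits[0]]:
                expanded.append(reading.replace(hits[0], alternative))
        readings = expanded
    return readings


if __name__ == "__main__":
    for reading in candidate_readings("he's gone to the bank"):
        print(reading)
    # Prints four readings; the text alone cannot tell the machine which one was meant.
```

Every additional contraction, idiom, or polysemous word multiplies the number of readings, and that multiplication is exactly the burden a constructed language is meant to remove.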
Can a machine be programmed to recognize verbal nuances in the human voice and avoid misinterpretation? We have been too entertained by and preoccupied with the current fad in AI development: making machines conscious like humans, as in the movies. As previously mentioned, the primary goal of development is not consciousness but to give humans a tool-creating tool. A machine does not have to be conscious to be accurate. If the objective is a definitive interface between humans and machines, then human-computer communication itself should be improved. Current AI models are largely built around English, and natural English is a chokepoint for machine parsing. For this reason, we should consider adopting a formally constructed language. It would be a simpler and more dependable medium for computer processing, and more neutral in bridging different linguistic backgrounds.
Advantages of constructed languages:
– able to express complex logical constructions precisely
– unambiguous in spelling and grammar
– culturally neutral
– learnable in a systematic, methodical way
– able to convey emotional context explicitly
There are languages designed and invented on the basis of logic (e.g., Lojban). Their grammar is built from a small set of simple rules, and their linguistic features are inspired by predicate logic. They serve many purposes; primarily, their inventors wanted to improve human-to-human communication by reducing ambiguity and improving expressiveness. They are also machine-parsable: the syntactic structure and validity of a sentence are unambiguous and can be analyzed with computer tools. For that reason, such a language matters for human-machine communication and could serve as the top layer of any AI.
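As a minimal sketch of what “machine-parsable” buys us, the toy parser below assumes a drastically simplified, fixed-arity grammar and a four-word vocabulary; it is not a real Lojban parser. The point is that every well-formed sentence has exactly one syntactic reading, so nothing is left for the machine to guess.

```python
# A minimal sketch, not real Lojban: a toy predicate-style grammar inspired by
# logical languages. Assumed grammar (for illustration only):
#   sentence  := argument predicate argument
#   argument  := one word from ARGUMENTS
#   predicate := one word from PREDICATES

from dataclasses import dataclass

ARGUMENTS = {"mi": "speaker", "do": "listener"}     # toy vocabulary (assumption)
PREDICATES = {"prami": "loves", "djuno": "knows"}   # toy vocabulary (assumption)


@dataclass
class Parse:
    predicate: str
    arg1: str
    arg2: str


def parse(sentence: str) -> Parse:
    """Return the single valid parse, or raise ValueError for ill-formed input."""
    words = sentence.split()
    if len(words) != 3:
        raise ValueError("a sentence is exactly: argument predicate argument")
    a1, pred, a2 = words
    if a1 not in ARGUMENTS or a2 not in ARGUMENTS or pred not in PREDICATES:
        raise ValueError(f"unknown word in: {sentence!r}")
    return Parse(PREDICATES[pred], ARGUMENTS[a1], ARGUMENTS[a2])


if __name__ == "__main__":
    print(parse("mi prami do"))
    # Parse(predicate='loves', arg1='speaker', arg2='listener') -- the only reading.
```

Compare this with the English sketch earlier: here the ambiguous simply cannot be expressed, which is why such a grammar can sit as a thin, reliable layer between the user and the model.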
This demystifies the ambiguities of conventional language. Building on such a language’s keyword system as its base makes Valmiz Aurora even more powerful: it now functions as a tool for bi-directional communication that ensures the messages conveyed match the user’s intentions. The key phrase here is task-specific tool. Augmentative artificial intelligence is here. Let us set up an introductory meeting so that we can present a tailored solution for your organization.
Last but not least, to answer the question that matters most: users will be able to express their prompts, queries, and commands freely; the machine parses them and performs exactly what it was told to do, without risking serious machine misinterpretation. Remember the Chinese Room? Imagine that the letter is written in a constructed language, easily parsable by the machine with no ambiguities, and that the rules and instructions given to the person inside are exhaustive, well-defined, and accurate. The result will be immaculately accurate.
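As a closing illustration, here is a hypothetical sketch, not Valmiz Aurora’s actual interface: it assumes a tiny “verb object” command grammar and two made-up actions, purely to show how an unambiguous command maps to exactly one machine action, performed exactly as stated.

```python
# Hypothetical sketch (assumed command grammar and placeholder actions),
# not any product's real API.

def archive(name: str) -> str:
    # Placeholder action (assumption): stands in for whatever the system really does.
    return f"archived report {name!r}"

def delete(name: str) -> str:
    # Placeholder action (assumption).
    return f"deleted report {name!r}"

ACTIONS = {"archive": archive, "delete": delete}

def execute(command: str) -> str:
    """Parse a 'verb object' command and run the single matching action, or refuse."""
    verb, _, obj = command.partition(" ")
    if verb not in ACTIONS or not obj:
        raise ValueError(f"not a well-formed command: {command!r}")
    return ACTIONS[verb](obj)

if __name__ == "__main__":
    print(execute("archive Q3-sales"))  # one interpretation, one action
```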