Up to now, in attempting to explain the meaning of a sentence, we automatically point to what the sentence says, usually in terms of some fancy conceptual or other cognitive semantic structure in the form of logical formulas, graphical networks, or other kinds of schemata. But whatever form the “meaning” structure takes, it is just an expression in another language, albeit disguised in prettier clothing. What kind of explanation is this? The meaning of a sentence is its translation into another language? This only kicks the can down the road!
This is like saying that the meaning of a program written in C++ is its translation into machine language. But that is basically the same program expressed in a different format, and we have gone nowhere as far as explaining the meaning of the original program is concerned.
Then, what is the meaning of the C++ program anyway? Ask the programmer! The only reason he wrote the program is that he wants it executed. He wants to bring about the effects of the execution, and that is the meaning of the program. Translating it into machine code may very well be a necessary step toward execution, but it is definitely not the meaning itself. If I tell you to “please sit down” and you understand it perfectly, having all the correct conceptual schemas in your mind, but just don’t do it, that is not what I meant by telling you to “please sit down.”
Language utterances are instructions produced by the speaker (programmer) to be executed on the receiving agent (target machine) to produce experience (the effects of execution), not information encoded by the sender to be extracted by the receiver to produce representations. Information exchange (the informative effect) is just a side effect of the causal effects. With this in mind, maybe we can finally get somewhere in creating an AI that can understand us in our own languages.