Ok, so I've further formalized my theory of grammar and I need to hear a coder's perspective on it. Here are the equations:
u = unit of knowledge
o = any occurrence
t = time
y = any non-negative integer (zero, one, two, ...)
m = meaning
u = y(o)
t = y(u)
m = P(u|t)
Plainly stated, these are the axioms.
1. A unit of knowledge is equal to any sequence of occurrences.
2. Time is equal to a collection of knowledge (any sequence of units).
3. Meaning is equal to the probability of any unit of knowledge given time.
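In case it helps, here's a rough sketch of how I imagine the first two axioms mapping onto data structures in Python. All the names here are mine, just for illustration, not part of the formalization itself:

```python
from dataclasses import dataclass, field

# An occurrence could be any observed event; a string stands in for it here.
Occurrence = str

@dataclass(frozen=True)
class Unit:
    # Axiom 1 (u = y(o)): a unit of knowledge is a sequence of occurrences.
    occurrences: tuple

@dataclass
class Timeline:
    # Axiom 2 (t = y(u)): time is a sequence of units of knowledge.
    units: list = field(default_factory=list)

    def record(self, unit: Unit) -> None:
        """Append one experienced unit to the timeline."""
        self.units.append(unit)
```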
This tries to account for the pragmatics of language. The computer will need to process what is on the screen and save the occurrences as arbitrarily named units of knowledge. It will also need to record the coordinates of any click that occurs on the screen. Hopefully, we can associate the clicks with units of knowledge (e.g. so we can click on the Firefox icon no matter where it is). The input will be any process or image. The output will be the most probable unit of knowledge (i.e. sequence of occurrences).
Here's an example of the data:
o1 = "John"
o2 = the figure of John
o3 = John moving his head
u1 = o1+o2+o3
u2 = o1+o2
u3 = o3
u4 = o1
u5 = o2
u6 = o3
The knowledge needs to be broken down into as many combinations of the occurrences as possible, because knowledge can consist of other knowledge.
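If I understand my own axiom right, "every combination" is just the non-empty powerset of the occurrences, which is easy to enumerate. A sketch (the function name is mine):

```python
from itertools import combinations

def all_units(occurrences):
    """Return every non-empty combination of a sequence of occurrences.

    A unit of knowledge can itself contain smaller units, so we
    enumerate every subset, not just the full sequence.
    """
    units = []
    for size in range(1, len(occurrences) + 1):
        for combo in combinations(occurrences, size):
            units.append(combo)
    return units

occ = ["o1", "o2", "o3"]
print(all_units(occ))  # 7 combinations: 3 singles, 3 pairs, 1 triple
```

Note that a full enumeration of three occurrences gives 2^3 - 1 = 7 units, one more than the six I listed above, because it also includes the pairs o1+o3 and o2+o3.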
t = u1
m = P(u1|t) = 100%
So now that we have one experience in our database (someone saying "John," the figure of John, and his reaction), we can use it to prompt knowledge. I'll show how in this next example.
input = u5
t = u5
m = P(u2|t) = 50%
Since u2 is the most probable unit given t (= u5), prompt u2.
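For what it's worth, here's one way the lookup step might be sketched: estimate P(u|t) as the frequency of each stored unit among the units that contain the cue. The exclusion of the cue itself as a candidate, and all the names, are my own assumptions, not part of the theory:

```python
from collections import Counter

def unit_probabilities(cue, experiences):
    """Estimate P(u | cue) by frequency over stored units.

    cue: a set of occurrences (the current "time" t).
    experiences: list of stored units, each a frozenset of occurrences.
    """
    cue = frozenset(cue)
    # Only units that contain the cue count toward the conditional;
    # the cue itself is excluded as a candidate (an assumption I'm
    # making so the cue prompts something other than itself).
    matching = [u for u in experiences if cue <= u and u != cue]
    total = len(matching)
    counts = Counter(matching)
    return {u: c / total for u, c in counts.items()} if total else {}

# The single stored experience, broken into units as above.
experiences = [
    frozenset({"o1", "o2", "o3"}),  # u1
    frozenset({"o1", "o2"}),        # u2
    frozenset({"o1"}),              # u4
    frozenset({"o2"}),              # u5
    frozenset({"o3"}),              # u3/u6
]
print(unit_probabilities({"o2"}, experiences))
```

With only one stored experience, both u1 and u2 come out at 50% given u5, so some tie-breaking rule (e.g. preferring the smaller unit) would be needed to prompt u2 specifically.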
Maybe a coder can see the math of this better than I can. My question is: is this possible given the current state of computation? Is there anything I should be aware of?