Epistemology Sequence, Part 5: Extension and Universality

One of the properties that you’d like a learning agent to have is that, if your old concepts work well, learning a new concept should extend your knowledge but not invalidate your old knowledge. Changes in your ontology should be roughly predictable rather than chaotic. If you learn that physics works differently at very large or very small scales, this should leave classical mechanics intact at moderate scales and accuracies.

From a goal-based perspective, this means that if you make a desirable change in ontology — let’s say you switch from one set of nodes to a different set — and you choose the “best” map from one ontology to another, in something like the Kullback-Leibler-minimizing sense described here — then when you take preimages of your “utility functions” on the new ontology onto the old ontology, they come out mostly the same.  The best decision remains the best decision.
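
As a first pass at making that precise, here is the condition in notation I am inventing for the purpose (none of it has been defined earlier in the sequence): write π for the assumed “best” (KL-minimizing) translation from old-ontology states to new-ontology states, and U_old, U_new for the utility functions on each ontology; the pullback U_new ∘ π should then pick out the same optimal action as U_old.

```latex
% Sketch only; \pi, U_{\mathrm{old}}, U_{\mathrm{new}}, and p_{\mathrm{old}} are assumed notation.
% An action a induces a distribution p_{\mathrm{old}}(\cdot \mid a) over old-ontology states s.
\[
  \operatorname*{arg\,max}_{a} \;
    \mathbb{E}_{s \sim p_{\mathrm{old}}(\cdot \mid a)}\!\left[ U_{\mathrm{old}}(s) \right]
  \;=\;
  \operatorname*{arg\,max}_{a} \;
    \mathbb{E}_{s \sim p_{\mathrm{old}}(\cdot \mid a)}\!\left[ U_{\mathrm{new}}\!\left( \pi(s) \right) \right]
\]
```

That is: after pulling the new utility back onto the old ontology, the best decision remains the best decision.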

In the special case where the old ontology is just a subset of the new ontology, this means that the maps between them are a restriction and an extension.  (For example, if we restrict (a, b, c, d, e) to the first three coordinates, it’s just the identity operation on those coordinates, (a, b, c); and if we extend (a, b, c) to (a, b, c, d, e), again the map is the identity on the first three coordinates.)  What we’d like to say is that, when we add new nodes to our ontology, then the function that computes values on that ontology (the f in Part 3 of this sequence) extends to a new f on the new ontology, while keeping the same values on the old nodes.
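
Stated a bit more formally (again in assumed notation): with the old ontology included in the new one, the restriction and extension maps compose to the identity on the old nodes, and the property we want of the value function f from Part 3 is that its new version agrees with the old one on those nodes.

```latex
% Sketch; \iota is the inclusion (extension) map, r the restriction map, and f the
% value-computing function from Part 3 of the sequence.
\[
  \iota : \mathcal{O}_{\mathrm{old}} \hookrightarrow \mathcal{O}_{\mathrm{new}}, \qquad
  r : \mathcal{O}_{\mathrm{new}} \to \mathcal{O}_{\mathrm{old}}, \qquad
  r \circ \iota = \mathrm{id}_{\mathcal{O}_{\mathrm{old}}},
\]
\[
  f_{\mathrm{new}}\big|_{\mathcal{O}_{\mathrm{old}}} = f_{\mathrm{old}} .
\]
```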

For example: let’s say I have a regression model that predicts SAT scores from a bunch of demographic variables. The “best” model minimizes the sum of squared errors; sum of squared errors is my utility function. Now, if I add a variable to my model, the utility function stays the same: it’s still “sum of squared errors.” So if adding that new variable changes the model but reduces the residuals, my old model “wants” to make the upgrade. On the other hand, an “upgrade” to my model that changes the utility function, like deciding to minimize the sum of squared errors plus the coefficient for the new variable, isn’t necessarily an improvement unless the “best” model by that criterion also shrinks the sum of squared errors relative to the original regression model.
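
Here is a small numerical sketch of that point, with made-up data standing in for the SAT example (nothing below is the actual model, just ordinary least squares in NumPy): adding a column can only match or reduce the training sum of squared errors, so by the old model’s own utility function the upgrade looks good.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data standing in for "SAT scores" and "demographic variables".
n = 200
x1 = rng.normal(size=n)                      # variable already in the model
x2 = rng.normal(size=n)                      # candidate new variable
y = 3.0 * x1 + 1.5 * x2 + rng.normal(scale=2.0, size=n)

def sse(X, y):
    """Sum of squared residuals of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones((n, 1))
X_old = np.hstack([ones, x1[:, None]])                # old model: intercept + x1
X_new = np.hstack([ones, x1[:, None], x2[:, None]])   # new model adds x2

# Same utility function ("sum of squared errors") evaluated on both models:
# the richer model can only match or reduce it on the training data,
# so the old model "wants" the upgrade by its own criterion.
print(sse(X_old, y))   # larger
print(sse(X_new, y))   # smaller (or equal)
```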

From the goal-oriented perspective, the only changes you’d want to make to your ontology are those which, when “projected” onto the old ontology, have you making the same “optimal” choices.
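
As a toy illustration of that criterion (the states, the map between ontologies, and the utilities below are all invented for the example), one could check that the new utility, translated back onto the old ontology, still picks the same action:

```python
# Toy check (hypothetical states, map, and utilities): accept an ontology change
# only if the new utility, pulled back onto the old ontology, picks the same action.

actions = ["shorts", "coat"]

# Old ontology: coarse weather states.  New ontology: refined states.
coarsen = {"hot-dry": "hot", "hot-humid": "hot",
           "cold-dry": "cold", "cold-humid": "cold"}

p_old = {"hot": 0.5, "cold": 0.5}          # beliefs on the old ontology
p_new = {s: 0.25 for s in coarsen}         # beliefs on the new ontology

u_old = {("shorts", "hot"): 1.0, ("shorts", "cold"): -1.0,
         ("coat", "hot"): -0.5, ("coat", "cold"): 1.0}

u_new = {("shorts", "hot-dry"): 1.2, ("shorts", "hot-humid"): 0.8,
         ("shorts", "cold-dry"): -1.0, ("shorts", "cold-humid"): -1.0,
         ("coat", "hot-dry"): -0.5, ("coat", "hot-humid"): -0.5,
         ("coat", "cold-dry"): 1.0, ("coat", "cold-humid"): 1.0}

def best_action(utility, beliefs):
    """Action with the highest expected utility under the given beliefs."""
    return max(actions, key=lambda a: sum(p * utility[(a, s)] for s, p in beliefs.items()))

# Pull u_new back onto the old ontology: translate each old state into the
# conditional distribution over the new states that refine it, and take the
# expected new utility.  (This stands in for the "best map" between ontologies.)
u_pulled_back = {}
for a in actions:
    for old_s in p_old:
        fiber = [s for s in p_new if coarsen[s] == old_s]
        total = sum(p_new[s] for s in fiber)
        u_pulled_back[(a, old_s)] = sum(p_new[s] * u_new[(a, s)] for s in fiber) / total

# The ontology change is "safe" if the optimal choice survives the translation.
print(best_action(u_old, p_old))          # "coat"
print(best_action(u_pulled_back, p_old))  # also "coat": the best decision is preserved
```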

(These statements still need to be made precise. There may be a conjecture in there, but I haven’t specified it yet. The whole business smells like Hahn-Banach to me, but I could be entirely mistaken. The universality of neural nets might be relevant to showing that this kind of “rational learner” is implementable with neural nets in the first place.)

2 thoughts on “Epistemology Sequence, Part 5: Extension and Universality”

  1. “let’s say I have a regression model that predicts SAT scores as a result of a bunch of demographic variables. If I add a new variable to my model, it might change the weights on the old variables; but it shouldn’t change the signs much.”

    Sorry if I’m missing something, but, doesn’t it happen in practice all the time (i.e. Simpson’s Paradox)? Do you mean this in some stylized sense, or are you assuming you have really small residuals already, or what?
