Case in point. Let's say I have two features, f1 and f2, that I believe I can use to measure some third feature f3. I have a sneaking suspicion that f1 and f2 aren't independent of each other, but I don't have a good model of what's really going on. I asked one of my co-workers for some guidance, and he suggested viewing this as a Bayes net. He drew f1 influencing f2 and f3 (i.e., f1 is a "true" feature and f2 is some feature "derived" from f1). He put a weight on each directed edge (the correlation coefficient) and went on to calculate how probable each configuration of the world was (let's suppose that all the features are binary valued, so there are 8 total configurations). This is where I start to lose things. A weight on an edge is a correlation coefficient? These edges have weights? I thought a Bayes net just showed which variables influenced which others and let us encode the distribution of values compactly in several CPTs. Instead, my co-worker has drawn a table with every possible configuration of the world (all 8 of them) and assigned each a probability. Where'd these come from?
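For my own sanity, here's how I understand the CPT version is supposed to work, as a sketch. With the structure my co-worker drew (f1 influencing both f2 and f3), the joint should factorize as P(f1) * P(f2|f1) * P(f3|f1), and the table of 8 configurations falls out of the CPTs directly. The numbers below are entirely made up for illustration:

```python
import itertools

# Hypothetical CPTs for the structure f1 -> f2, f1 -> f3.
# The probabilities here are invented; only the factorization matters.
p_f1 = {True: 0.6, False: 0.4}                   # P(f1)
p_f2_given_f1 = {True: {True: 0.8, False: 0.2},  # P(f2 | f1)
                 False: {True: 0.3, False: 0.7}}
p_f3_given_f1 = {True: {True: 0.9, False: 0.1},  # P(f3 | f1)
                 False: {True: 0.2, False: 0.8}}

# Joint distribution: P(f1, f2, f3) = P(f1) * P(f2|f1) * P(f3|f1).
# This yields the probability of each of the 8 world configurations.
joint = {}
for f1, f2, f3 in itertools.product([True, False], repeat=3):
    joint[(f1, f2, f3)] = p_f1[f1] * p_f2_given_f1[f1][f2] * p_f3_given_f1[f1][f3]

for config, prob in sorted(joint.items()):
    print(config, round(prob, 4))
```

If that's right, then no edge "weights" are needed at all; the CPTs carry all the information, and the full table is just their product. Which makes the correlation-coefficient business even more mysterious to me.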
Clearly, I'm missing something fundamental. I think perhaps I need to spend some quality time with a good statistics book, and I have a few at hand to look through. Unfortunately, it also doesn't help that I'm used to a very different pedagogical style than my co-worker can provide, so I'm a bit at a loss to understand what he's trying to say. Oh, well.
In any event, I've computed some statistics I do understand, namely, means and 95% confidence intervals, and I hope that they'll be enough to satisfy the people who were asking for them. I guess I'll find that out tomorrow.
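For the record, the statistics I did compute are nothing fancy. This is roughly what I did, using the usual normal-approximation interval (mean ± 1.96·s/√n); the sample data here is a stand-in:

```python
import math
from statistics import mean, stdev

# Stand-in sample; the real feature measurements aren't shown here.
data = [2.1, 1.9, 2.4, 2.2, 2.0, 2.3, 1.8, 2.5]

# Normal-approximation 95% confidence interval for the mean:
# mean +/- 1.96 * (sample std dev) / sqrt(n)
m = mean(data)
half_width = 1.96 * stdev(data) / math.sqrt(len(data))
print(f"mean = {m:.3f}, 95% CI = ({m - half_width:.3f}, {m + half_width:.3f})")
```

(For small samples a t-distribution critical value would be more defensible than 1.96, but this is the version I can explain with a straight face.)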