After writing down the title, I seem to have forgotten what I wanted to say *____*!!!
OK, the idea just came back to me.
When Dr. CHEN was giving a presentation on Leo Breiman’s paper “Some Infinity Theory for Predictor Ensembles”, I suddenly reached a deeper understanding of boosting and bagging (though I still don’t understand some of the details). I believe there are some very important issues here about the philosophy of machine learning, issues that are hardly emphasized (or even mentioned) in the Chinese books and papers I have read.
Take boosting for example: I like to describe this algorithm as “Good Good Study, Day Day Up”, because it really “studies” its past results to improve future classifiers.
That’s where the machine is “learning”, I think.
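To make that “studying from past results” idea concrete, here is a minimal AdaBoost-style sketch in Python. This is my own toy illustration, not code from Breiman’s paper, and the names (`train_adaboost`, `best_stump`, and so on) are just placeholders for this example. The key line is the weight update: each round up-weights the examples the current weak learner got wrong, so the next learner is forced to pay attention to them.

```python
import numpy as np

def best_stump(X, y, w):
    """Pick the threshold stump with the lowest weighted error (exhaustive search)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thresh in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] < thresh, -sign, sign)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best = err, (j, thresh, sign)
    return best

def stump_predict(stump, X):
    j, thresh, sign = stump
    return np.where(X[:, j] < thresh, -sign, sign)

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = best_stump(X, y, w)               # weak learner fit on weighted data
        pred = stump_predict(stump, X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # vote weight for this round
        # "study from past results": up-weight the examples this stump got wrong,
        # so the next round's learner focuses on them
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Weighted majority vote over all rounds' stumps."""
    score = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.sign(score)

# toy example: 1-D points, labels in {-1, +1}
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([1, 1, -1, -1, 1, 1])
stumps, alphas = train_adaboost(X, y, n_rounds=5)
print(predict(stumps, alphas, X))   # recovers y on this toy set
```

Note that no single stump can classify this toy set correctly; only by letting each round learn from the previous rounds’ mistakes does the ensemble get it right. That is the sense in which the machine is “learning”.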