Low rank does not necessarily mean "understanding"; it's just a local minimum in some function of a random variable. People like both steak and cookies, and the two would conceivably fall into a "low rank" "humans like this" category, but they couldn't be more dissimilar in many respects. For example, North America and South America are both part of the "new world", but no sociologist would use that to make general predictions about the people on either continent. Conversely, some doctors have an encyclopedic knowledge of various ailments, and either memorize or infer from experience a particular ailment from a huge collection of possible ailments. Humans find and reject "low rank" correlations all the time.

I appreciate the thought-provoking examples :-)

> Low rank does not necessarily mean "understanding"; it's just a local minimum in some function of a random variable.

"Understanding" would entail a (causal) model of underlying physiological problems and the symptoms they give rise to, with a recipe for inferring in reverse.

> Conversely, some doctors have an encyclopedic knowledge of various ailments, and either memorize or infer from experience a particular ailment from a huge collection of possible ailments.

That's exactly why I would not call that understanding, especially if they have to memorize a book full of ailments and symptoms. If the number of possible ailments is large, their "understanding" is not low rank; it's the opposite.

> Humans find and reject "low rank" correlations all the time.

Sure, but the appropriateness of a low rank approximation depends on what you want to predict. For predicting basketball success, wingspan might be a very useful "low rank" description, while for predicting obesity the relevant low rank description might be the body mass index (BMI) or something like it. Wingspan = chest width + 2 * arm length and BMI = weight / height^2 could be composite features "discovered" by such a model + inference toolkit.
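To make the wingspan and BMI formulas concrete, here is a minimal sketch in Python (the synthetic data and all constants are illustrative assumptions, not from the thread). Wingspan is already a linear composite of raw measurements; BMI becomes linear only in log space, where it corresponds to the fixed direction (1, -2) over (log weight, log height), which is the kind of direction a linear low-rank method could in principle recover.

```python
# Illustrative sketch: BMI and wingspan as composite features.
# All distributions and constants below are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
height = rng.normal(1.75, 0.10, n)   # metres
weight = rng.normal(75.0, 12.0, n)   # kilograms
chest = rng.normal(0.35, 0.04, n)    # chest width, metres
arm = rng.normal(0.75, 0.05, n)      # arm length, metres

# Linear composite feature: a weighted sum of raw measurements.
wingspan = chest + 2 * arm

# Nonlinear composite feature: a ratio of powers of raw measurements.
bmi = weight / height**2

# In log space BMI is linear: log(bmi) = 1*log(weight) - 2*log(height),
# i.e. a projection onto the fixed direction (1, -2).
log_feats = np.column_stack([np.log(weight), np.log(height)])
bmi_direction = np.array([1.0, -2.0])
assert np.allclose(log_feats @ bmi_direction, np.log(bmi))
```

The log-space trick is why "discovering" BMI with a linear factorization is plausible at all: the nonlinearity is confined to a fixed elementwise transform of the inputs.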
I think my point may not have gotten across clearly, so let me clarify. I agree with most of what you said, which IMHO falls under the spirit of the supervised machine learning paradigm, the most advanced tool we have pragmatically available today. Looking beyond, if we find a model + learner which can discover a low-dimensional latent space (through certain indirectly specified biases), then that low-dimensional formulation of the domain can guide us towards interesting questions worth asking. To have a factorized low-dimensional formulation is roughly what it means to "understand" a subject, so such a toolkit would be enormously useful. Many people (including some experts, rightly or wrongly) believe that neural networks might be that model class, and that (clever tweaks of) gradient descent might be an acceptable learner. All I'm saying is that the current hype about AI fails to separate the potential of the latter class from the currently available successful tools of the former.

> In my experience, at least, the value in this kind of activity is relatively low.

Definitely. Sometimes you find something; usually you just find a low-rank version of nothing in particular. Unless some significant developments appear in this space (and they very well might), the best low-dimensional approximations will remain the means, variances, and correlations computed by business analysts, which are tremendously valuable quantities to keep in mind at all times.
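The "sometimes you find something, usually you just find a low-rank version of nothing in particular" point can be made visible in the singular value spectrum. A minimal sketch (synthetic data; the matrix sizes and signal strength are assumptions of mine): a matrix with genuine low-rank structure has a dominant singular value, while for pure noise the top singular value explains only a small fraction of the variance.

```python
# Illustrative sketch: when does a low-rank approximation "find something"?
# Compare the singular value spectra of pure noise and of rank-1 signal + noise.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 50

# Case 1: no structure at all, just i.i.d. Gaussian noise.
noise = rng.normal(size=(n, d))

# Case 2: one genuine latent factor (rank-1 signal) plus the same noise.
u = rng.normal(size=(n, 1))
v = rng.normal(size=(1, d))
structured = 5.0 * u @ v + rng.normal(size=(n, d))

for name, X in [("noise", noise), ("structured", structured)]:
    s = np.linalg.svd(X, compute_uv=False)
    frac = s[0] ** 2 / np.sum(s ** 2)
    print(f"{name}: top singular value explains {frac:.0%} of the variance")
```

With these parameters the structured matrix's top singular value captures the bulk of the variance, while for the noise matrix it captures only a few percent; a rank-1 approximation of the noise matrix exists either way, it just approximates nothing in particular.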