1989 | | | **Trade-Off Among Parameters Affecting Inductive Inference** - *R. Freivalds, C. H. Smith and M. Velauthapillai* |

| | | **A greedy method for learning -DNF functions under the uniform distribution** - *G. Pagallo and D. Haussler* |

| | | **LT Revisited: Explanation-Based Learning and the Logic of Principia Mathematica** - *Paul O’Rorke* |

| | | **Identifying µ-decision trees and µ-formulas with constrained instance queries** - *T. Hancock* |

| | | **The equivalence and learning of probabilistic automata** - *W. Tzeng* |

| | | **Identification of unions of languages drawn from an identifiable class** - *K. Wright* |

| | | **Learning Decision Trees from Random Examples** - *A. Ehrenfeucht and D. Haussler* |

| | | **Efficient Specialization of Relational Concepts** - *Kurt VanLehn* |

| | | **Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence results** - *D. Haussler* |

| | | **Polynomial learning of semilinear sets** - *N. Abe* |

| | | **Learning Conjunctive Concepts in Structural Domains** - *D. Haussler* |

| | | **Fast Learning in Networks of Locally-Tuned Processing Units** - *J. Moody and C. Darken* |

| | | **On learning from exercises** - *B. K. Natarajan* |

| | | **Reliable and useful learning** - *J. Kivinen* |

| | | **Inductive Inference From Good Examples** - *R. Freivalds, E. B. Kinber and R. Wiehagen* |

| | | **Constant depth circuits, Fourier transform, and learnability** - *N. Linial, Y. Mansour and N. Nisan* |

| | | **The CN2 Induction Algorithm** - *Peter Clark and Tim Niblett* |

| | | **Learning structure from data: a survey** - *J. Pearl and R. Dechter* |

| | | **A Reconfigurable Analog VLSI Neural Network Chip** - *H. P. Graf, S. Satyanarayana and Y. Tsividis* |

| | | **Recursion Theoretic Characterizations of Language Learning** - *S. Jain and A. Sharma* |

| | | **Efficient NC algorithms for set cover with applications to learning and geometry** - *B. Berger, J. Rompel and P. W. Shor* |

| | | **Erratum one** - *Authorless* |

| | | **Learning Arm Kinematics and Dynamics** - *C. G. Atkeson* |

| | | **Computational Learning Theory: New Models and Algorithms** - *R. H. Sloan* |

| | | **On Learning Sets and Functions** - *B. K. Natarajan* |

| | | **Learning Faster than Promised by the Vapnik-Chervonenkis Dimension** - *A. Blumer and N. Littlestone* |

| | | **When Will Machines Learn?** - *Douglas B. Lenat* |

| | | **Refined Query Inference** - *E. B. Kinber and T. Zeugmann* |

| | | **Stochastic Complexity in Statistical Inquiry** - *J. Rissanen* |

| | | **Can Machine Learning Offer Anything to Expert Systems?** - *Bruce G. Buchanan* |

| | | **Learnability and the Vapnik-Chervonenkis Dimension** - *A. Blumer, A. Ehrenfeucht, D. Haussler and M. K. Warmuth* |

| | | **Probably-Approximate Learning over Classes of Distributions** - *B. K. Natarajan* |

| | | **On the complexity of learning from counterexamples** - *W. Maass and G. Turán* |

| | | **Induction from the general to the more general** - *K. T. Kelly* |

| | | **Coping with uncertainty in map learning** - *K. Basye, T. Dean and J. Vitter* |

| | | **Neural networks, principal components, and subspaces** - *E. Oja* |

| | | **Consistent inference of probabilities in layered networks: predictions and generalizations** - *N. Tishby, E. Levin and S. Solla* |

| | | **On the role of search for learning** - *S. A. Kurtz and C. H. Smith* |

| | | **The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning** - *Y. S. Abu-Mostafa* |

| | | **On the error probability of Boolean concept descriptions** - *F. Bergadano and L. Saitta* |

| | | **Toward a Unified Science of Machine Learning** - *P. Langley* |

| | | **Equivalence queries and approximate fingerprints** - *D. Angluin* |

| | | **Approximation by Superpositions of a Sigmoidal Function** - *G. Cybenko* |

| | | **Elementary formal systems as a unifying framework for language learning** - *S. Arikawa, T. Shinohara and A. Yamamoto* |

| | | **A polynomial-time algorithm for learning k-variable pattern languages from examples** - *M. Kearns and L. Pitt* |

| | | **The World Would Be a Better Place if Non-Programmers Could Program** - *John McDermott* |

| | | **On Metric Entropy, Vapnik-Chervonenkis Dimension, and Learnability for a Class of Distributions** - *S. Kulkarni* |

| | | **Representation Properties of Networks: Kolmogorov’s Theorem Is Irrelevant** - *T. Poggio and F. Girosi* |

| | | **Fast Learning in Multi-Resolution Hierarchies** - *J. Moody* |

| | | **Semi-Supervised Learning** - *R. A. Board and L. Pitt* |

| | | **Towards representation independence in PAC learning** - *M. Warmuth* |

| | | **An Empirical Comparison of Selection Measures for Decision-Tree Induction** - *John Mingers* |

| | | **The light bulb problem** - *R. Paturi, S. Rajasekaran and J. Reif* |

| | | **Learning nested differences of intersection-closed concept classes** - *D. Helmbold, R. Sloan and M. K. Warmuth* |

| | | **On Characterizing and Learning Some Classes of Read-once Functions** - *L. Hellerstein* |

| | | **A Critique of the Valiant Model** - *W. Buntine* |

| | | **From on-line to batch learning** - *N. Littlestone* |

| | | **Planning and learning in permutation groups** - *A. Fiat, S. Moses, A. Shamir, I. Shimshoni and G. Tardos* |

| | | **An Empirical Comparison of Pruning Methods for Decision Tree Induction** - *John Mingers* |

| | | **Knowledge of an Upper Bound on Grammar Size Helps Language Learning** - *S. Jain and A. Sharma* |

| | | **Training a 3-node neural net is NP-Complete** - *A. Blum and R. L. Rivest* |

| | | **Polynomial Learnability as a Formal Model of Natural Language Acquisition** - *Naoki Abe* |

| | | **Task-Structures, Knowledge Acquisition and Learning** - *B. Chandrasekaran* |

| | | **Training sequences** - *D. Angluin, W. Gasarch and C. Smith* |

| | | **Knowledge Acquisition for Knowledge-Based Systems: Notes on the State-of-the-Art** - *John H. Boose and Brian R. Gaines* |

| | | **Conceptual Clustering, Categorization, and Polymorphy** - *Stephen José Hanson and Malcolm Bauer* |

| | | **Incremental Induction of Decision Trees** - *Paul E. Utgoff* |

| | | **Supporting Start-to-Finish Development of Knowledge Bases** - *Ray Bareiss, Bruce W. Porter and Kenneth S. Murray* |

| | | **A parametrization scheme for classifying models of learnability** - *S. Ben-David, G. M. Benedek and Y. Mansour* |

| | | **Automated Support for Building and Extending Expert Models** - *Mark A. Musen* |

| | | **Learning simple deterministic languages** - *H. Ishizaka* |

| | | **Automated Knowledge Acquisition for Strategic Knowledge** - *Thomas R. Gruber* |

| | | **Optimal unsupervised learning in a single-layer linear feedforward neural network** - *T. D. Sanger* |

| | | **Performance of a Stochastic Learning Chip** - *J. Alspector and R. B. Allen* |

| | | **An application of minimum description length principle to online recognition of handprinted alphanumerals** - *Q. Gao and M. Li* |

| | | **Space-bounded learning and the Vapnik-Chervonenkis dimension** - *S. Floyd* |

| | | **Editorial one** - *Jaime G. Carbonell* |

| | | **Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space** - *G. E. Hinton* |

| | | **Identifying decision trees with equivalence queries** - *T. Hancock* |

| | | **Learnability and Linguistic Theory** - *R. J. Matthews and W. Demopoulos* |

| | | **A Heuristic Approach to the Discovery of Macro-operators** - *Glenn A. Iba* |

| | | **A general lower bound on the number of examples needed for learning** - *A. Ehrenfeucht, D. Haussler, M. Kearns and L. G. Valiant* |

| | | **The Knowledge Level Reinterpreted: Modeling How Systems Interact** - *William J. Clancey* |

| | | **Bounding sample size with the Vapnik-Chervonenkis dimension** - *J. Shawe-Taylor, M. Anthony and N. L. Biggs* |

| | | **Learning read-once formulas using membership queries** - *L. Hellerstein and M. Karpinski* |

| | | **Monte-Carlo Inference and its Relations to Reliable Frequency Identification** - *E. B. Kinber and T. Zeugmann* |

| | | **Cryptographic limitations on learning Boolean formulae and finite automata** - *M. Kearns and L. G. Valiant* |

| | | **Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms** - *N. Littlestone* |

| | | **What Size Net Gives Valid Generalization?** - *E. Baum and D. Haussler* |

| | | **Learning Automata - An Introduction** - *K. S. Narendra and M. A. L. Thathachar* |

| | | **Complexity issues in learning by neural nets** - *J. Lin and J. S. Vitter* |

| | | **A Statistical Approach to Learning and Generalization in Neural Networks** - *E. Levin, N. Tishby and S. Solla* |

| | | **Synthetic Neural Modelling: Comparisons of Population and Connectionist Approaches** - *G. N. Reeke Jr., O. Sporns and G. M. Edelman* |

| | | **A Study of Explanation-Based Methods for Inductive Learning** - *Nicholas S. Flann and Thomas G. Dietterich* |

| | | **Some Results on Learning** - *B. K. Natarajan* |

| | | **Proc. 2nd Annu. Workshop on Comput. Learning Theory** - *R. Rivest, D. Haussler and M. K. Warmuth* |

| | | **Probabilistic Inductive Inference** - *L. Pitt* |

| | | **Using queries to identify -formulas** - *D. Angluin* |

| | | **Inductive inference with bounded number of mind changes** - *M. Velauthapillai* |

| | | **Regressiveness** - *M. Fulk* |

| | | **Adaptive Neural Networks Using MOS Charge Storage** - *D. B. Schwartz, R. E. Howard and W. E. Hubbard* |

| | | **Synergy of clustering multiple backpropagation networks** - *N. Lincoln and J. Skrzypek* |

| | | **Informed parsimonious inference of prototypical genetic sequences** - *A. Milosavljević, D. Haussler and J. Jurka* |

| | | **Genetic Algorithms in Search, Optimization, and Machine Learning** - *D. E. Goldberg* |

| | | **News and Notes first of 89** - *Thomas G. Dietterich* |

| | | **News and Notes second of 89** - *T. G. Dietterich* |

| | | **A Parallel Network that Learns to Play Backgammon** - *G. Tesauro and T. J. Sejnowski* |

| | | **A theory of learning simple concepts under simple distributions and average case complexity for the universal distribution** - *M. Li and P. M. B. Vitanyi* |

| | | **On approximate truth** - *D. N. Osherson, M. Stob and S. Weinstein* |

| | | **Editorial two** - *Jaime G. Carbonell* |

| | | **Convergence to nearly minimal size grammars by vacillating learning machines** - *S. Jain, A. Sharma and J. Case* |

| February | | **Approximation of Boolean functions by sigmoidal networks: Part I: XOR and other two-variable functions** - *E. K. Blum* |

| | | **Backpropagation Can Give Rise to Spurious Local Minima Even for Networks without Hidden Layers** - *E. D. Sontag and H. J. Sussmann* |

| March | | **Inferring Decision Trees Using the Minimum Description Length Principle** - *J. R. Quinlan and R. L. Rivest* |

| May | | **The Use of Artificial Neural Networks for Phonetic Recognition** - *H. C. Leung* |

| | | **Learning from Delayed Rewards** - *C. J. C. H. Watkins* |

| | | **The Computational Complexity of Machine Learning** - *M. Kearns* |

| | | **On the Computational Complexity of Training Simple Neural Networks** - *A. Blum* |

| | | **Back Propagation Fails to Separate Where Perceptrons Succeed** - *M. L. Brady, R. Raghavan and J. Slawny* |

| June | | **Finding Natural Clusters Through Entropy Minimization** - *R. S. Wallace* |

| | | **Neural Network Learning: Effects of Network and Training Set Size** - *N. Perugini* |

| | | **Tensor Manipulation Networks: Connectionist and Symbolic Approaches to Comprehension, Learning, and Planning** - *C. P. Dolan* |

| July | | **A ‘Neural’ Network that Learns to Play Backgammon** - *G. Tesauro and T. J. Sejnowski* |

| August | | **Learning in the Presence of Inaccurate Information** - *M. A. Fulk and S. Jain* |

| | | **Inductive Principles of the Search for Empirical Dependences (Methods Based on Weak Convergence of Probability Measures)** - *V. N. Vapnik* |

| | | **Accelerated Backpropagation Learning: Two Optimization Methods** - *R. Battiti* |

| September | | **A General Lower Bound on the Number of Examples Needed for Learning** - *A. Ehrenfeucht and D. Haussler* |

| | | **Made-up Minds: A Constructivist Approach to Artificial Intelligence** - *G. L. Drescher* |

| | | **Generalizing the PAC Model for Neural Net and Other Learning Applications** - *D. Haussler* |

| October | | **An Experimental Comparison of Connectionist and Conventional Classification Systems on Natural Data** - *P. C. Woodland and S. G. Smyth* |

| | | **Networks and the Best Approximation Property** - *T. Poggio and F. Girosi* |

| | | **Inductive Inference, DFAs, and Computational Complexity** - *L. Pitt* |

| | | **Towards Representation Independence in PAC-learning** - *M. K. Warmuth* |

| | | **Inductive Inference from Theory Laden Data** - *K. T. Kelly and C. Glymour* |

| November | | **Discovering the Structure of a Reactive Environment by Exploration** - *M. C. Mozer and J. Bachrach* |

| | | **Learnability in the Presence of Classification Noise** - *Y. Sakakibara* |

| December | | **Space-bounded learning and the Vapnik-Chervonenkis Dimension** (Ph.D. thesis) - *S. Floyd* |