[1] Bialasiewicz J.: Statistical data reduction via construction of sample space partitions. Kybernetika 6 (1970), 6, 371–379. MR 0283910 | Zbl 0218.94005
[2] Dempster A. P., Laird N. M., Rubin D. B.: Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. B 39 (1977), 1–38. MR 0501537 | Zbl 0364.62022
[3] Grim J.: On numerical evaluation of maximum-likelihood estimates for finite mixtures of distributions. Kybernetika 18 (1982), 3, 173–190. MR 0680154 | Zbl 0489.62028
[4] Grim J.: Design and optimization of multilevel homogeneous structures for multivariate pattern recognition. In: Fourth FORMATOR Symposium 1982, Academia, Prague 1982, pp. 233–240. MR 0726960
[5] Grim J.: Multivariate statistical pattern recognition with non-reduced dimensionality. Kybernetika 22 (1986), 6, 142–157
[6] Grim J.: Maximum-likelihood design of layered neural networks. In: Proc. Internat. Conference Pattern Recognition. IEEE Computer Society Press, Los Alamitos 1996, pp. 85–89
[7] Grim J.: Design of multilayer neural networks by information preserving transforms. In: Third European Congress on Systems Science (E. Pessa, M. P. Penna, and A. Montesanto, eds.). Edizioni Kappa, Roma 1996, pp. 977–982
[8] Grim J.: Information approach to structural optimization of probabilistic neural networks. In: Fourth European Congress on Systems Science (L. Ferrer and A. Caselles, eds.). SESGE, Valencia 1999, pp. 527–539
[9] Grim J.: Discretization of probabilistic neural networks with bounded information loss. In: Computer-Intensive Methods in Control and Data Processing (Preprints of the 3rd European IEEE Workshop CMP’98, Prague 1998, J. Rojicek et al., eds.), ÚTIA AV ČR, Prague 1998, pp. 205–210
[10] Grim J.: A sequential modification of EM algorithm. In: Proc. Classification in the Information Age (W. Gaul and H. Locarek-Junge, eds., Studies in Classification, Data Analysis, and Knowledge Organization), Springer, Berlin 1999, pp. 163–170
[11] Grim J.: Self-organizing maps and probabilistic neural networks. Neural Network World 10 (2000), 3, 407–415
[12] Grim J.: Probabilistic Neural Networks (in Czech). In: Umělá inteligence IV. (V. Mařík, O. Štěpánková, and J. Lažanský, eds.), Academia, Praha 2003, pp. 276–312
[13] Grim J., Just P., Pudil P.: Strictly modular probabilistic neural networks for pattern recognition. Neural Network World 13 (2003), 6, 599–615
[14] Grim J., Kittler J., Pudil P., Somol P.: Combining multiple classifiers in probabilistic neural networks. In: Multiple Classifier Systems (Lecture Notes in Computer Science 1857, J. Kittler and F. Roli, eds.). Springer, Berlin 2000, pp. 157–166
[15] Grim J., Kittler J., Pudil P., Somol P.: Information analysis of multiple classifier fusion. In: Multiple Classifier Systems 2001 (Lecture Notes in Computer Science 2096, J. Kittler and F. Roli, eds.). Springer, Berlin – New York 2001, pp. 168–177. MR 2043268 | Zbl 0987.68898
[16] Grim J., Kittler J., Pudil P., Somol P.: Multiple classifier fusion in probabilistic neural networks. Pattern Analysis & Applications 5 (2002), 7, 221–233. MR 1930448 | Zbl 1021.68079
[17] Grim J., Pudil P., Somol P.: Recognition of handwritten numerals by structural probabilistic neural networks. In: Proc. Second ICSC Symposium on Neural Computation (H. Bothe and R. Rojas, eds.). ICSC, Wetaskiwin 2000, pp. 528–534
[18] Grim J., Pudil P., Somol P.: Boosting in probabilistic neural networks. In: Proc. 16th International Conference on Pattern Recognition (R. Kasturi, D. Laurendeau and C. Suen, eds.). IEEE Computer Society, Los Alamitos 2002, pp. 136–139
[19] Grim J., Somol P., Pudil P., Just P.: Probabilistic neural network playing a simple game. In: Artificial Neural Networks in Pattern Recognition (S. Marinai and M. Gori, eds.). University of Florence, Florence 2003, pp. 132–138
[20] Grim J., Somol P., Pudil P.: Probabilistic neural network playing and learning Tic-Tac-Toe. Pattern Recognition Letters, Special Issue 26 (2005), 12, 1866–1873
[21] Haykin S.: Neural Networks: A Comprehensive Foundation. Morgan Kaufman, San Mateo 1993. Zbl 0934.68076
[23] Perez A.: Information, $\varepsilon$-sufficiency and data reduction problems. Kybernetika 1 (1965), 4, 297–323. MR 0205410
[24] Perez A.: $\varepsilon$-admissible simplification of the dependence structure of a set of random variables. Kybernetika 13 (1977), 6, 439–449. MR 0472224
[25] Schlesinger M. I.: Relation between learning and self-learning in pattern recognition (in Russian). Kibernetika (1968), 6, 81–88
[26] Specht D. F.: Probabilistic neural networks for classification, mapping or associative memory. In: Proc. IEEE Internat. Conference on Neural Networks 1988, Vol. I, pp. 525–532
[27] Streit R. L., Luginbuhl T. E.: Maximum likelihood training of probabilistic neural networks. IEEE Trans. Neural Networks 5 (1994), 764–783
[28] Vajda I., Grim J.: About the maximum information and maximum likelihood principles in neural networks. Kybernetika 34 (1998), 4, 485–494. MR 0359208
[29] Watanabe S., Fukumizu K.: Probabilistic design of layered neural networks based on their unified framework. IEEE Trans. Neural Networks 6 (1995), 3, 691–702
[30] Xu L., Jordan M. I.: On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation 8 (1996), 129–151