Book 30

This book presents a powerful hybrid intelligent system based on fuzzy logic, neural networks, genetic algorithms and related intelligent techniques. The new compensatory genetic fuzzy neural networks have been widely used in fuzzy control, nonlinear system modeling, compression of a fuzzy rule base, expansion of a sparse fuzzy rule base, fuzzy knowledge discovery, time series prediction, fuzzy games and pattern recognition. This effective soft computing system is able to perform both linguistic-word-level fuzzy reasoning and numerical-data-level information processing. The book also proposes various novel soft computing techniques.
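
As a rough illustration of the linguistic-word-level fuzzy reasoning mentioned above, the following sketch evaluates a single fuzzy rule over a numerical input. The membership function, the rule itself, and the simple defuzzification step are illustrative assumptions only and are not taken from the book's compensatory genetic fuzzy neural network model.

def triangular(x, a, b, c):
    # Triangular membership function peaking at b (illustrative assumption).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fire_rule(temperature):
    # Assumed linguistic rule: IF temperature IS warm THEN fan_speed IS medium.
    return triangular(temperature, 15.0, 22.0, 30.0)  # firing strength in [0, 1]

def defuzzify(strength, medium_speed=50.0):
    # Crude defuzzification: scale a prototypical 'medium' speed by the firing strength.
    return strength * medium_speed

print(defuzzify(fire_rule(24.0)))  # crisp numerical output from a linguistic rule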

Book 32

This book is an introduction to pattern recognition, meant for undergraduate and graduate students in computer science and related fields in science and technology. Most of the topics are accompanied by detailed algorithms and real-world applications. In addition to statistical and structural approaches, novel topics such as fuzzy pattern recognition and pattern recognition via neural networks are also reviewed. Each topic is followed by several examples solved in detail. The only prerequisites for using this book are a one-semester course in discrete mathematics and a knowledge of the basic preliminaries of calculus, linear algebra and probability theory.

Book 62

This book describes exciting new opportunities for utilizing robust graph representations of data with common machine learning algorithms. Graphs can model additional information which is often not present in commonly used data representations, such as vectors. Through the use of graph distance — a relatively new approach for determining graph similarity — the authors show how well-known algorithms, such as k-means clustering and k-nearest neighbors classification, can be easily extended to work with graphs instead of vectors. This allows for the utilization of additional information found in graph representations, while at the same time employing well-known, proven algorithms.

To demonstrate and investigate these novel techniques, the authors have selected the domain of web content mining, which involves the clustering and classification of web documents based on their textual substance. Several methods of representing web document content by graphs are introduced; an interesting feature of these representations is that they allow for a polynomial-time distance computation, something which is typically an NP-complete problem when using graphs. Experimental results are reported for both clustering and classification in three web document collections using a variety of graph representations, distance measures, and algorithm parameters.

In addition, this book describes several other related topics, many of which provide excellent starting points for researchers and students interested in exploring this new area of machine learning further. These topics include creating graph-based multiple classifier ensembles through random node selection and visualization of graph-based data using multidimensional scaling.
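
As a sketch of how a standard algorithm can run directly on graphs, the example below classifies a query graph with k-nearest neighbors using a distance of the form 1 - |mcs(g1, g2)| / max(|g1|, |g2|), where the maximum common subgraph is approximated by simple node- and edge-set intersection. That shortcut is only valid when node labels are unique, as in the web document graphs described above; the data structures, helper names, and toy data are illustrative assumptions, not the book's implementation.

from collections import Counter

def graph_distance(g1, g2):
    # g = (nodes, edges): nodes is a set of labels, edges a set of label pairs.
    # With uniquely labeled nodes, the maximum common subgraph reduces to the
    # intersection of the node and edge sets, so the distance is polynomial.
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    mcs_size = len(nodes1 & nodes2) + len(edges1 & edges2)
    max_size = max(len(nodes1) + len(edges1), len(nodes2) + len(edges2))
    return 1.0 - mcs_size / max_size if max_size else 0.0

def knn_classify(query, training_set, k=3):
    # training_set: list of (graph, class_label) pairs; majority vote over the
    # k graphs closest to the query under the graph distance above.
    neighbors = sorted(training_set, key=lambda item: graph_distance(query, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Tiny usage example with hypothetical term-graphs of web documents.
g_sports = ({"ball", "team", "goal"}, {("ball", "goal"), ("team", "goal")})
g_finance = ({"stock", "market", "price"}, {("stock", "price"), ("market", "price")})
g_query = ({"team", "goal", "score"}, {("team", "goal")})
print(knn_classify(g_query, [(g_sports, "sports"), (g_finance, "finance")], k=1))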

Book 68

In graph-based structural pattern recognition, the idea is to transform patterns into graphs and perform the analysis and recognition of patterns in the graph domain — commonly referred to as graph matching. A large number of methods for graph matching have been proposed. Graph edit distance, for instance, defines the dissimilarity of two graphs by the amount of distortion that is needed to transform one graph into the other and is considered one of the most flexible methods for error-tolerant graph matching.

This book focuses on graph kernel functions that are highly tolerant towards structural errors. The basic idea is to incorporate concepts from graph edit distance into kernel functions, thus combining the flexibility of edit distance-based graph matching with the power of kernel machines for pattern recognition. The authors introduce a collection of novel graph kernels related to edit distance, including diffusion kernels, convolution kernels, and random walk kernels. In an experimental evaluation on a semi-artificial line drawing data set and four real-world data sets consisting of pictures, microscopic images, fingerprints, and molecules, the authors demonstrate that some of the kernel functions, in conjunction with support vector machines, significantly outperform traditional edit distance-based nearest-neighbor classifiers in terms of both classification accuracy and running time.
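
The sketch below illustrates the general idea of turning an edit distance into a kernel, here by exponentiating the negative squared distance. This particular transformation, and the toy string edit distance standing in for graph edit distance, are illustrative assumptions (the book's diffusion, convolution, and random walk kernels are more elaborate), and the resulting matrix is not guaranteed to be positive definite.

import math

def edit_distance(a, b):
    # Toy Levenshtein distance on strings, standing in for graph edit distance.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(a)][len(b)]

def kernel_matrix(samples, gamma=0.1):
    # Edit-distance-based similarity: k(x, y) = exp(-gamma * d(x, y)^2).
    # The matrix can be handed to a kernel machine such as an SVM with a
    # precomputed kernel; positive definiteness is not guaranteed in general.
    return [[math.exp(-gamma * edit_distance(x, y) ** 2) for y in samples] for x in samples]

for row in kernel_matrix(["abc", "abd", "xyz"]):
    print(["%.3f" % value for value in row])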

Book 71

This book addresses the task of processing online handwritten notes acquired from an electronic whiteboard, which is a new modality in handwriting recognition research. The main motivation of this book is smart meeting rooms, which aim to automate standard tasks usually performed by humans in a meeting.

The book can be summarized as follows. A new online handwriting database is compiled, and four handwriting recognition systems are developed. Moreover, novel preprocessing and normalization strategies are designed especially for whiteboard notes, and a new neural-network-based recognizer is applied. Commercial recognition systems are included in a multiple classifier system. The experimental results on the test set show a highly significant improvement of the recognition performance, to more than 86%.
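
To give a flavor of the multiple classifier system mentioned above, the sketch below combines the word hypotheses of several recognizers by confidence-weighted voting. The recognizer outputs, weights, and voting rule are illustrative assumptions and do not represent the combination scheme actually used in the book.

from collections import defaultdict

def combine_hypotheses(hypotheses):
    # hypotheses: list of (word, confidence) pairs, one per recognizer.
    # Each recognizer votes for its word with its confidence as the weight.
    scores = defaultdict(float)
    for word, confidence in hypotheses:
        scores[word] += confidence
    return max(scores, key=scores.get)

# Hypothetical outputs of three recognizers for the same handwritten word.
outputs = [("meeting", 0.81), ("meeting", 0.64), ("melting", 0.72)]
print(combine_hypotheses(outputs))  # -> "meeting"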

Book 76

This book introduces SpecDB, an intelligent database created to represent and host software specifications in a machine-readable format, based on the principles of artificial intelligence and unit testing of database operations. SpecDB is demonstrated via two automated intelligent tools. The first automatically generates database constraints from a rule base in SpecDB. The second is a reverse engineering tool that logs the actual execution of the program from the code.
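
Purely as an illustration of the first tool's idea of generating database constraints from rules, the sketch below turns a simple range rule into a SQL CHECK constraint. The rule format, table name, and column name are hypothetical and do not reflect SpecDB's actual representation or tooling.

def rule_to_check_constraint(table, column, low, high):
    # Hypothetical rule: the value of `column` must lie in [low, high].
    return (
        f"ALTER TABLE {table} "
        f"ADD CONSTRAINT chk_{table}_{column} "
        f"CHECK ({column} BETWEEN {low} AND {high});"
    )

print(rule_to_check_constraint("employees", "age", 18, 65))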

Book 77

This book is concerned with a fundamentally novel approach to graph-based pattern recognition based on vector space embedding of graphs. It aims at condensing the high representational power of graphs into a computationally efficient and mathematically convenient feature vector.

This volume utilizes the dissimilarity space representation originally proposed by Duin and Pekalska to embed graphs in real vector spaces. Such an embedding gives one access to all algorithms developed in the past for feature vectors, which have been the predominant representation formalism in pattern recognition and related areas for a long time.
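
The sketch below shows the core of the dissimilarity space embedding described above: each graph is mapped to the vector of its distances to a fixed set of prototype graphs, after which any feature-vector method can be applied. The simple node-set distance used here is only a stand-in for a proper graph edit distance, and the prototype selection is arbitrary; both are assumptions for illustration.

def node_set_distance(g1, g2):
    # Stand-in dissimilarity: Jaccard distance between node-label sets.
    # The book would use a graph edit distance here instead.
    union = g1 | g2
    return (1.0 - len(g1 & g2) / len(union)) if union else 0.0

def embed(graph, prototypes, distance=node_set_distance):
    # Dissimilarity space embedding: one coordinate per prototype graph.
    return [distance(graph, p) for p in prototypes]

# Hypothetical graphs given as node-label sets.
prototypes = [{"a", "b", "c"}, {"c", "d"}, {"e", "f", "g"}]
g = {"a", "c", "d"}
print(embed(g, prototypes))  # a plain feature vector usable by any standard classifier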

Book 89

In recent years, libraries and archives all around the world have increased their efforts to digitize historical manuscripts. To integrate the manuscripts into digital libraries, pattern recognition and machine learning methods are needed to extract and index the contents of the scanned images.

This unique compendium describes the outcome of the HisDoc research project, a pioneering attempt to study the whole processing chain of layout analysis, handwriting recognition, and retrieval of historical manuscripts. This description is complemented with an overview of other related research projects, in order to convey the current state of the art in the field and outline future trends.

This must-have volume is a relevant reference work for librarians, archivists and computer scientists.