
E-book: Neural Networks and Computing: Learning Algorithms and Applications (English)

# Computer Science # Networks # Neural Network Learning Algorithms  Size: 2.36 MB | Pages: 322 | Listed: 2022-03-01 | Language: English

电子书-神经网络和计算:学习算法和应用Neural Networks and Computing. Learning Algorithms and Applications (英).pdf


Type: E-book

Uploader: 二一

Publication date: 2022-03-01

Abstract:

Published by Imperial College Press, 2007, 322 pp.

The area of neural computing discussed in this book combines techniques from classical optimization, statistics, and information theory. Neural networks were once widely called artificial neural networks, a name that reflected how the emerging technology related to artificial intelligence. The topic captivated computer scientists, engineers, and mathematicians alike: the appeal of an adaptive system, or universal function approximator, proved compelling to most researchers and engineers, and Backpropagation was once among the most popular keywords at engineering conferences. The field has an interesting history dating back to the late fifties, which saw the advent of the Mark I Perceptron, but the truly intriguing chapter began in the sixties, when Minsky and Papert's book Perceptrons discredited early neural research. The late eighties are well remembered by all neural researchers, because neural network research was then reinstated and repositioned; from the nineties into the new millennium the topic flourished, with applications stretching from rigorous mathematical proofs to the physical sciences and even business. Now that the theoretical background is better understood, researchers tend to say neural networks rather than artificial neural networks. Volumes of research literature have been published on new developments in neural theory and applications, and many treatments approach the topic either very mathematically or very practically. For most users, however, including students and engineers, the main issues remain how to employ an appropriate neural network learning algorithm and how to select a model for a given physical problem.
Written from a more application-oriented perspective, this book provides thorough discussions of neural network learning algorithms and their related issues. We strive for balance in covering the major topics of neurocomputing, from learning theory, learning algorithms, and network architecture to applications. The book starts from the fundamental building block, the neuron, and from the earliest neural network model, the McCulloch and Pitts model. We first treat the learning concept through the well-known regression problem, which shows how the idea of data fitting can explain the fundamental concept of neural learning, and we use a convex error surface to illustrate the optimization concept behind learning algorithms. This is important because it shows readers that a neural learning algorithm is nothing more than a high-dimensional optimization problem. One of the beauties of neural networks as a soft computing approach is that the choice of model structure and initial settings may have no noticeable effect on the final solution. But the neural learning process also suffers from being slow and from getting stuck in local minima, especially on rather complex problems. These are the two main issues addressed in the later chapters of this book. We study the neural learning problem from a new perspective and offer several modified algorithms that enhance learning speed and convergence ability. We also show that the initialization of a network has a significant effect on learning performance; different initialization methods are then discussed and elaborated.
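The data-fitting view of neural learning described above can be sketched in a few lines: gradient descent on the convex mean-squared-error surface of a one-neuron linear model. This is a minimal illustration of the concept, not code from the book (its software is in MATLAB); the toy data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy regression data: y = 2x + 1 with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)

w, b = 0.0, 0.0  # initial settings of the single "neuron"
lr = 0.1         # learning rate

for _ in range(500):
    err = (w * x + b) - y
    # Gradient of the mean-squared error with respect to (w, b).
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w  # step down the convex error surface
    b -= lr * grad_b

print(w, b)  # close to the true slope 2 and intercept 1
```

In a multilayer network the error surface is no longer convex, which is exactly why the slow convergence and local minima discussed above become the central issues.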
Later chapters of the book deal with basis function networks, the self-organizing map, and feature selection, topics of interest to most engineering and science researchers. The self-organizing map (SOM) is the most widely used unsupervised neural network; it is useful for clustering, dimensionality reduction, and classification, and it differs greatly from the feedforward neural network in both architecture and learning algorithm. This book provides thorough discussions and newly developed extended algorithms for readers to use. Classification and feature selection are discussed in Chapter 6. We include this topic because bioinformatics has recently become a very important research area: gene selection by computational methods and computational cancer classification have become hallmarks of 21st-century research. The book gives a detailed discussion of feature selection and of how different methods can be applied to gene selection and cancer classification. We hope this book provides useful and inspiring information to readers; a number of software algorithms written in MATLAB are available for readers to use. Although the authors have gone through the book a few times checking for typos and errors, we would appreciate readers notifying us of any that remain.

Introduction
Learning Performance and Enhancement
Generalization and Performance Enhancement
Basis Function Networks for Classification
Self-organizing Maps
Classification and Feature Selection

Engineering Applications
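The SOM learning scheme mentioned in the abstract, so unlike feedforward training, can be sketched as competitive best-matching-unit search plus a neighborhood update. This is a generic illustration, not the book's extended algorithms; the 1-D grid size, decay schedules, and Gaussian neighborhood are illustrative assumptions.

```python
import numpy as np

# Minimal self-organizing map: a 1-D chain of 10 units clustering 2-D points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.1, (50, 2))
                  for loc in ((0, 0), (1, 1), (0, 1))])
units = rng.uniform(0, 1, (10, 2))  # weight vector of each map unit

for t in range(200):
    lr = 0.5 * (1 - t / 200)            # decaying learning rate
    sigma = 3.0 * (1 - t / 200) + 0.5   # decaying neighborhood width
    for x in data[rng.permutation(len(data))]:
        # Competition: find the best-matching unit (BMU) in input space.
        bmu = np.argmin(np.linalg.norm(units - x, axis=1))
        # Cooperation: Gaussian neighborhood measured on the 1-D grid.
        grid_dist = np.abs(np.arange(10) - bmu)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        # Adaptation: pull the BMU and its grid neighbors toward the sample.
        units += lr * h[:, None] * (x - units)
```

After training, each unit's weight vector sits near one of the data clusters, so nearby grid positions map to nearby regions of the input space, which is what makes the SOM useful for clustering and dimensionality reduction.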

