Deep Learning with Matlab: Big Data and Neural Networks

Author: Vidales, A.
ISBN: 978-1-7929-2217-6
Publication Date: Dec 2018
Publisher: Independently Published
Book Format: Paperback
List Price: USD $22.50
Book Description:
The treatment of large data requires computational structures that implement parallelism and distributed computing; Big Data structures provide these capabilities. You can train a convolutional neural network (CNN, ConvNet) or a long short-term memory network (LSTM or BiLSTM) using the trainNetwork function, and choose the execution environment (CPU, GPU, multi-GPU, or parallel) using trainingOptions. Training in parallel, or on a GPU, requires Parallel Computing Toolbox.

Neural networks are inherently parallel algorithms. Multicore CPUs, graphical processing units (GPUs), and clusters of computers with multiple CPUs and GPUs can take advantage of this parallelism. Parallel Computing Toolbox, used in conjunction with Deep Learning Toolbox, enables neural network training and simulation to exploit each mode of parallelism. Distributed and GPU computing can be combined to run calculations across multiple CPUs and/or GPUs on a single computer, or on a cluster with MATLAB Distributed Computing Server.

Parallel Computing Toolbox allows neural network training and simulation to run across multiple CPU cores on a single PC, or across multiple CPUs on multiple computers on a network using MATLAB Distributed Computing Server. Using multiple cores can speed up calculations, and using multiple computers lets you solve problems whose data sets are too big to fit in the RAM of a single computer; the only limit to problem size is the total quantity of RAM available across all computers. To manage cluster configurations, use the Cluster Profile Manager.
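The workflow the description outlines can be sketched as follows. This is a minimal illustration, not taken from the book: it trains a small CNN on MATLAB's built-in digits sample data and selects the execution environment through trainingOptions. The specific layer sizes and training parameters are illustrative choices; Deep Learning Toolbox is required, and the 'gpu', 'multi-gpu', and 'parallel' environments additionally require Parallel Computing Toolbox.

```matlab
% Load the built-in sample data set of 28x28 grayscale digit images.
[XTrain, YTrain] = digitTrain4DArrayData;

% A small example CNN architecture (illustrative layer sizes).
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(5, 20)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Choose the execution environment here:
% 'auto' | 'cpu' | 'gpu' | 'multi-gpu' | 'parallel'
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'auto', ...
    'MaxEpochs', 4, ...
    'Verbose', false);

% Train the network with the selected execution environment.
net = trainNetwork(XTrain, YTrain, layers, options);
```

Switching 'ExecutionEnvironment' to 'parallel' runs training on a local parallel pool, or on a cluster pool configured through the Cluster Profile Manager, without changing the rest of the code.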