FinUniversity Electronic Library

     

Details

Expert insight.
Deep learning with TensorFlow and Keras [electronic resource]: build and deploy supervised, unsupervised, deep, and reinforcement learning models. — Third edition. — [S.l.]: PACKT PUBLISHING LIMITED, 2022. — 1 online resource (698 pages) : illustrations. — (Expert insight). — <URL:http://elib.fa.ru/ebsco/3402545.pdf>.

Record creation date: 10/8/2022

Subject: Machine learning; Artificial intelligence; Neural networks (Computer science); Python (Computer program language)

Collections: EBSCO

Annotation

Deep Learning with TensorFlow and Keras teaches you neural networks and deep learning techniques using TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available. TensorFlow 2.x focuses on simplicity and ease of use, with updates like eager execution, intuitive higher-level APIs based on Keras, and flexible model building on any platform. This book uses the latest TF 2.x features and libraries to present an overview of supervised and unsupervised machine learning models, and provides a comprehensive analysis of deep learning and reinforcement learning models using practical examples for the cloud, mobile, and large production environments. It also shows you how to create neural networks with TensorFlow, runs through popular algorithms (regression, convolutional neural networks (CNNs), transformers, generative adversarial networks (GANs), recurrent neural networks (RNNs), natural language processing (NLP), and graph neural networks (GNNs)), covers working example apps, and then dives into TF in production, TF mobile, and TensorFlow with AutoML.
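
To give a flavor of the Keras-based high-level API the annotation refers to, here is a minimal sketch (not an excerpt from the book; the dataset choice, layer sizes, and hyperparameters are illustrative assumptions):

    import tensorflow as tf

    # Minimal sketch of a tf.keras workflow (illustrative assumptions throughout):
    # a small dense classifier for the built-in MNIST handwritten-digit dataset.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> flat vector
        tf.keras.layers.Dense(128, activation="relu"),    # hidden layer (arbitrary size)
        tf.keras.layers.Dropout(0.2),                     # simple regularization
        tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

With TensorFlow 2.x installed, this trains a small handwritten-digit classifier end to end, mirroring the "simple neural net in TensorFlow" progression listed under Chapter 1 of the table of contents.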

Document access rights

Network                       User group   Actions
Finuniversity Local Network   All          Read, Print, Download
Internet                      Readers      Read, Print
Internet                      Anonymous    (none)

Table of Contents

  • Cover
  • Copyright
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Neural Network Foundations with TF
    • What is TensorFlow (TF)?
    • What is Keras?
    • Introduction to neural networks
    • Perceptron
      • Our first example of TensorFlow code
    • Multi-layer perceptron: our first example of a network
      • Problems in training the perceptron and their solutions
      • Activation function: sigmoid
      • Activation function: tanh
      • Activation function: ReLU
      • Two additional activation functions: ELU and Leaky ReLU
      • Activation functions
      • In short: what are neural networks after all?
    • A real example: recognizing handwritten digits
      • One-hot encoding (OHE)
      • Defining a simple neural net in TensorFlow
      • Running a simple TensorFlow net and establishing a baseline
      • Improving the simple net in TensorFlow with hidden layers
      • Further improving the simple net in TensorFlow with dropout
      • Testing different optimizers in TensorFlow
      • Increasing the number of epochs
      • Controlling the optimizer learning rate
      • Increasing the number of internal hidden neurons
      • Increasing the size of batch computation
      • Summarizing experiments run for recognizing handwritten digits
    • Regularization
      • Adopting regularization to avoid overfitting
      • Understanding batch normalization
    • Playing with Google Colab: CPUs, GPUs, and TPUs
    • Sentiment analysis
      • Hyperparameter tuning and AutoML
    • Predicting output
    • A practical overview of backpropagation
    • What have we learned so far?
    • Toward a deep learning approach
    • Summary
    • References
  • Chapter 2: Regression and Classification
    • What is regression?
    • Prediction using linear regression
      • Simple linear regression
      • Multiple linear regression
      • Multivariate linear regression
    • Neural networks for linear regression
      • Simple linear regression using TensorFlow Keras
      • Multiple and multivariate linear regression using the TensorFlow Keras API
    • Classification tasks and decision boundaries
      • Logistic regression
      • Logistic regression on the MNIST dataset
    • Summary
    • References
  • Chapter 3: Convolutional Neural Networks
    • Deep convolutional neural networks
      • Local receptive fields
      • Shared weights and bias
      • A mathematical example
      • ConvNets in TensorFlow
      • Pooling layers
        • Max pooling
        • Average pooling
      • ConvNets summary
    • An example of DCNN: LeNet
      • LeNet code in TF
      • Understanding the power of deep learning
    • Recognizing CIFAR-10 images with deep learning
      • Improving the CIFAR-10 performance with a deeper network
      • Improving the CIFAR-10 performance with data augmentation
      • Predicting with CIFAR-10
    • Very deep convolutional networks for large-scale image recognition
      • Recognizing cats with a VGG16 network
      • Utilizing the tf.Keras built-in VGG16 net module
      • Recycling pre-built deep learning models for extracting features
    • Deep Inception V3 for transfer learning
    • Other CNN architectures
      • AlexNet
      • Residual networks
      • HighwayNets and DenseNets
      • Xception
    • Style transfer
      • Content distance
      • Style distance
    • Summary
    • References
  • Chapter 4: Word Embeddings
    • Word embedding ‒ origins and fundamentals
    • Distributed representations
    • Static embeddings
      • Word2Vec
      • GloVe
    • Creating your own embeddings using Gensim
    • Exploring the embedding space with Gensim
    • Using word embeddings for spam detection
      • Getting the data
      • Making the data ready for use
      • Building the embedding matrix
      • Defining the spam classifier
      • Training and evaluating the model
      • Running the spam detector
    • Neural embeddings – not just for words
      • Item2Vec
      • node2vec
    • Character and subword embeddings
    • Dynamic embeddings
    • Sentence and paragraph embeddings
    • Language model-based embeddings
      • Using BERT as a feature extractor
    • Summary
    • References
  • Chapter 5: Recurrent Neural Networks
    • The basic RNN cell
      • Backpropagation through time (BPTT)
      • Vanishing and exploding gradients
    • RNN cell variants
      • Long short-term memory (LSTM)
      • Gated recurrent unit (GRU)
      • Peephole LSTM
    • RNN variants
      • Bidirectional RNNs
      • Stateful RNNs
    • RNN topologies
      • Example ‒ One-to-many – Learning to generate text
      • Example ‒ Many-to-one – Sentiment analysis
      • Example ‒ Many-to-many – POS tagging
    • Encoder-decoder architecture – seq2seq
      • Example ‒ seq2seq without attention for machine translation
    • Attention mechanism
      • Example ‒ seq2seq with attention for machine translation
    • Summary
    • References
  • Chapter 6: Transformers
    • Architecture
      • Key intuitions
        • Positional encoding
        • Attention
        • Self-attention
        • Multi-head (self-)attention
      • How to compute attention
      • Encoder-decoder architecture
      • Residual and normalization layers
      • An overview of the transformer architecture
      • Training
    • Transformers’ architectures
      • Categories of transformers
        • Decoder or autoregressive
        • Encoder or autoencoding
        • Seq2seq
        • Multimodal
        • Retrieval
      • Attention
        • Full versus sparse
        • LSH attention
        • Local attention
    • Pretraining
      • Encoder pretraining
      • Decoder pretraining
      • Encoder-decoder pretraining
      • A taxonomy for pretraining tasks
    • An overview of popular and well-known models
      • BERT
      • GPT-2
      • GPT-3
      • Reformer
      • BigBird
      • Transformer-XL
      • XLNet
      • RoBERTa
      • ALBERT
      • StructBERT
      • T5 and MUM
      • ELECTRA
      • DeBERTa
      • The Evolved Transformer and MEENA
      • LaMDA
      • Switch Transformer
      • RETRO
      • Pathways and PaLM
    • Implementation
      • Transformer reference implementation: An example of translation
      • Hugging Face
        • Generating text
        • Autoselecting a model and autotokenization
        • Named entity recognition
        • Summarization
        • Fine-tuning
      • TFHub
    • Evaluation
      • Quality
        • GLUE
        • SuperGLUE
        • SQuAD
        • RACE
        • NLP-progress
      • Size
        • Larger doesn’t always mean better
      • Cost of serving
    • Optimization
      • Quantization
      • Weight pruning
      • Distillation
    • Common pitfalls: dos and don’ts
      • Dos
      • Don’ts
    • The future of transformers
    • Summary
  • Chapter 7: Unsupervised Learning
    • Principal component analysis
      • PCA on the MNIST dataset
      • TensorFlow Embedding API
    • K-means clustering
      • K-means in TensorFlow
      • Variations in k-means
    • Self-organizing maps
      • Colour mapping using a SOM
    • Restricted Boltzmann machines
      • Reconstructing images using an RBM
      • Deep belief networks
    • Summary
    • References
  • Chapter 8: Autoencoders
    • Introduction to autoencoders
    • Vanilla autoencoders
      • TensorFlow Keras layers ‒ defining custom layers
      • Reconstructing handwritten digits using an autoencoder
    • Sparse autoencoder
    • Denoising autoencoders
      • Clearing images using a denoising autoencoder
    • Stacked autoencoder
      • Convolutional autoencoder for removing noise from images
      • A TensorFlow Keras autoencoder example ‒ sentence vectors
    • Variational autoencoders
    • Summary
    • References
  • Chapter 9: Generative Models
    • What is a GAN?
      • MNIST using GAN in TensorFlow
    • Deep convolutional GAN (DCGAN)
      • DCGAN for MNIST digits
    • Some interesting GAN architectures
      • SRGAN
      • CycleGAN
      • InfoGAN
    • Cool applications of GANs
    • CycleGAN in TensorFlow
    • Flow-based models for data generation
    • Diffusion models for data generation
    • Summary
    • References
  • Chapter 10: Self-Supervised Learning
    • Previous work
    • Self-supervised learning
    • Self-prediction
      • Autoregressive generation
        • PixelRNN
        • Image GPT (IPT)
        • GPT-3
        • XLNet
        • WaveNet
        • WaveRNN
      • Masked generation
        • BERT
        • Stacked denoising autoencoder
        • Context autoencoder
        • Colorization
      • Innate relationship prediction
        • Relative position
        • Solving jigsaw puzzles
        • Rotation
      • Hybrid self-prediction
        • VQ-VAE
        • Jukebox
        • DALL-E
        • VQ-GAN
    • Contrastive learning
      • Training objectives
        • Contrastive loss
        • Triplet loss
        • N-pair loss
        • Lifted structural loss
        • NCE loss
        • InfoNCE loss
        • Soft nearest neighbors loss
      • Instance transformation
        • SimCLR
        • Barlow Twins
        • BYOL
        • Feature clustering
        • DeepCluster
        • SwAV
        • InterCLR
      • Multiview coding
        • AMDIM
        • CMC
      • Multimodal models
        • CLIP
        • CodeSearchNet
        • Data2Vec
    • Pretext tasks
    • Summary
    • References
  • Chapter 11: Reinforcement Learning
    • An introduction to RL
      • RL lingo
      • Deep reinforcement learning algorithms
        • How does the agent choose its actions, especially when untrained?
        • How does the agent maintain a balance between exploration and exploitation?
        • How to deal with the highly correlated input state space
        • How to deal with the problem of moving targets
      • Reinforcement success in recent years
    • Simulation environments for RL
    • An introduction to OpenAI Gym
      • Random agent playing Breakout
      • Wrappers in Gym
    • Deep Q-networks
      • DQN for CartPole
      • DQN to play a game of Atari
      • DQN variants
        • Double DQN
        • Dueling DQN
        • Rainbow
    • Deep deterministic policy gradient
    • Summary
    • References
  • Chapter 12: Probabilistic TensorFlow
    • TensorFlow Probability
    • TensorFlow Probability distributions
      • Using TFP distributions
        • Coin Flip Example
        • Normal distribution
      • Bayesian networks
      • Handling uncertainty in predictions using TensorFlow Probability
        • Aleatory uncertainty
        • Epistemic uncertainty
        • Creating a synthetic dataset
        • Building a regression model using TensorFlow
        • Probabilistic neural networks for aleatory uncertainty
        • Accounting for the epistemic uncertainty
    • Summary
    • References
  • Chapter 13: An Introduction to AutoML
    • What is AutoML?
    • Achieving AutoML
    • Automatic data preparation
    • Automatic feature engineering
    • Automatic model generation
    • AutoKeras
    • Google Cloud AutoML and Vertex AI
      • Using the Google Cloud AutoML Tables solution
      • Using the Google Cloud AutoML Text solution
      • Using the Google Cloud AutoML Video solution
      • Cost
    • Summary
    • References
  • Chapter 14: The Math Behind Deep Learning
    • History
    • Some mathematical tools
      • Vectors
      • Derivatives and gradients everywhere
      • Gradient descent
      • Chain rule
      • A few differentiation rules
      • Matrix operations
    • Activation functions
      • Derivative of the sigmoid
      • Derivative of tanh
      • Derivative of ReLU
    • Backpropagation
      • Forward step
      • Backstep
        • Case 1: From hidden layer to output layer
        • Case 2: From hidden layer to hidden layer
      • Cross entropy and its derivative
      • Batch gradient descent, stochastic gradient descent, and mini-batch
        • Batch gradient descent
        • Stochastic gradient descent
        • Mini-batch gradient descent
      • Thinking about backpropagation and ConvNets
      • Thinking about backpropagation and RNNs
    • A note on TensorFlow and automatic differentiation
    • Summary
    • References
  • Chapter 15: Tensor Processing Unit
    • C/G/T processing units
      • CPUs and GPUs
      • TPUs
    • Four generations of TPUs, plus Edge TPU
      • First generation TPU
      • Second generation TPU
      • Third generation TPU
      • Fourth generation TPUs
      • Edge TPU
    • TPU performance
    • How to use TPUs with Colab
      • Checking whether TPUs are available
      • Keras MNIST TPU end-to-end training
    • Using pretrained TPU models
    • Summary
    • References
  • Chapter 16: Other Useful Deep Learning Libraries
    • Hugging Face
    • OpenAI
      • OpenAI GPT-3 API
      • OpenAI DALL-E 2
      • OpenAI Codex
    • PyTorch
    • ONNX
    • H2O.ai
      • H2O AutoML
      • AutoML using H2O
      • H2O model explainability
        • Partial dependence plots
        • Variable importance heatmap
        • Model correlation
    • Summary
  • Chapter 17: Graph Neural Networks
    • Graph basics
    • Graph machine learning
    • Graph convolutions – the intuition behind GNNs
    • Common graph layers
      • Graph convolution network
      • Graph attention network
      • GraphSAGE (sample and aggregate)
      • Graph isomorphism network
    • Common graph applications
      • Node classification
      • Graph classification
      • Link prediction
    • Graph customizations
      • Custom layers and message passing
      • Custom graph dataset
        • Single graphs in datasets
        • Set of multiple graphs in datasets
    • Future directions
      • Heterogeneous graphs
      • Temporal Graphs
    • Summary
    • References
  • Chapter 18: Machine Learning Best Practices
    • The need for best practices
    • Data best practices
      • Feature selection
      • Features and data
        • Augmenting textual data
    • Model best practices
      • Baseline models
      • Pretrained models, model APIs, and AutoML
      • Model evaluation and validation
      • Model improvements
    • Summary
    • References
  • Chapter 19: TensorFlow 2 Ecosystem
    • TensorFlow Hub
      • Using pretrained models for inference
    • TensorFlow Datasets
      • Load a TFDS dataset
      • Building data pipelines using TFDS
    • TensorFlow Lite
      • Quantization
      • FlatBuffers
      • Mobile converter
      • Mobile optimized interpreter
      • Supported platforms
      • Architecture
      • Using TensorFlow Lite
      • A generic example of an application
      • Using GPUs and accelerators
      • An example of an application
    • Pretrained models in TensorFlow Lite
      • Image classification
      • Object detection
      • Pose estimation
      • Smart reply
      • Segmentation
      • Style transfer
      • Text classification
      • Large language models
      • A note about using mobile GPUs
    • An overview of federated learning at the edge
      • TensorFlow FL APIs
    • TensorFlow.js
      • Vanilla TensorFlow.js
      • Converting models
      • Pretrained models
      • Node.js
    • Summary
    • References
  • Chapter 20: Advanced Convolutional Neural Networks
    • Composing CNNs for complex tasks
      • Classification and localization
      • Semantic segmentation
      • Object detection
      • Instance segmentation
      • Keras Applications
      • TensorFlow Hub
    • Answering questions about images (visual Q&A)
    • Creating a DeepDream network
    • Inspecting what a network has learned
    • Video
      • Classifying videos with pretrained nets in six different ways
    • Text documents
      • Using a CNN for sentiment analysis
    • Audio and music
      • Dilated ConvNets, WaveNet, and NSynth
    • A summary of convolution operations
      • Basic CNNs
      • Dilated convolution
      • Transposed convolution
      • Separable convolution
      • Depthwise convolution
      • Depthwise separable convolution
    • Capsule networks
      • What is the problem with CNNs?
      • What is new with capsule networks?
    • Summary
    • References
  • Other Books You May Enjoy
  • Index
