Electronic Library of the Financial University


Detailed information

De Marchi, Leonardo. Hands-on neural networks: learn how to build and train your first neural network model using Python / Leonardo De Marchi, Laura Mitchell. — 1 online resource — <URL:http://elib.fa.ru/ebsco/2148645.pdf>.

Record created: 04.06.2019

Subjects: Neural networks (Computer science); Python (Computer program language); Artificial intelligence.

Collections: EBSCO

Abstract

This book will be a journey for beginners who want to step into the world of deep learning and artificial intelligence. It will thoughtfully take you through the training and implementation of various neural network architectures using the Python ecosystem. You will master each neural network architecture while understanding its working mechanism.

Usage rights for this item

Access location                      User group        Actions
Financial University local network   All               Read, Print, Download
Internet                             Readers           Read, Print
Internet                             Anonymous users   (none)

Contents

  • Cover
  • Title Page
  • Copyright and Credits
  • Dedication
  • About Packt
  • Contributors
  • Table of Contents
  • Preface
  • Section 1: Getting Started
  • Chapter 1: Getting Started with Supervised Learning
    • History of AI
    • An overview of machine learning
      • Supervised learning
      • Unsupervised learning
      • Semi-supervised learning
      • Reinforcement learning
    • Environment setup
      • Understanding virtual environments 
      • Anaconda
      • Docker
    • Supervised learning in practice with Python
      • Data cleaning
    • Feature engineering
      • How deep learning performs feature engineering
        • Feature scaling
        • Feature engineering in Keras
    • Supervised learning algorithms
      • Metrics
        • Regression metrics
        • Classification metrics
      • Evaluating the model
        • TensorBoard
    • Summary
  • Chapter 2: Neural Network Fundamentals
    • The perceptron
      • Implementing a perceptron
    • Keras
      • Implementing perceptron in Keras
    • Feedforward neural networks
      • Introducing backpropagation
      • Activation functions
        • Sigmoid
        • Softmax
        • Tanh
        • ReLU
      • Keras implementation
        • The chain rule
        • The XOR problem
    • FFNN in Python from scratch 
      • FFNN Keras implementation
      • TensorBoard
      • TensorBoard on the XOR problem
    • Summary
  • Section 2: Deep Learning Applications
  • Chapter 3: Convolutional Neural Networks for Image Processing
    • Understanding CNNs
      • Input data
    • Convolutional layers
      • Pooling layers
        • Stride
        • Max pooling
        • Zero padding
      • Dropout layers
      • Normalization layers
      • Output layers
    • CNNs in Keras
      • Loading the data
      • Creating the model
      • Network configuration
    • Keras for expression recognition
    • Optimizing the network
    • Summary
  • Chapter 4: Exploiting Text Embedding
    • Machine learning for NLP
      • Rule-based methods
    • Understanding word embeddings
      • Applications of word embeddings
      • Word2vec
        • Word embedding in Keras
        • Pre-trained network
    • GloVe
      • Global matrix factorization
      • Using the GloVe model
      • Text classification with GloVe
    • Summary
  • Chapter 5: Working with RNNs
    • Understanding RNNs
      • Theory behind RNNs
        • Types of RNNs
          • One-to-one
          • One-to-many
          • Many-to-many
        • The same lag
        • A different lag
        • Loss functions
    • Long Short-Term Memory
      • LSTM architecture
    • LSTMs in Keras
      • PyTorch basics
      • Time series prediction
    • Summary
  • Chapter 6: Reusing Neural Networks with Transfer Learning
    • Transfer learning theory
      • Introducing multi-task learning
      • Reusing other networks as feature extractors
    • Implementing MTL
    • Feature extraction
    • Implementing TL in PyTorch
    • Summary
  • Section 3: Advanced Applications
  • Chapter 7: Working with Generative Algorithms
    • Discriminative versus generative algorithms
    • Understanding GANs
      • Training GANs
      • GAN challenges
    • GAN variations and timelines
      • Conditional GANs
      • DCGAN
        • ReLU versus Leaky ReLU
        • DCGAN – a coded example
      • Pix2Pix GAN
      • StackGAN
      • CycleGAN
      • ProGAN
      • StarGAN
        • StarGAN discriminator objectives
        • StarGAN generator functions
    • BigGAN
    • StyleGAN
      • Style modules
      • StyleGAN implementation
    • Deepfakes
    • RadialGAN
    • Summary
    • Further reading
  • Chapter 8: Implementing Autoencoders
    • Overview of autoencoders
    • Autoencoder applications
    • Bottleneck and loss functions
    • Standard types of autoencoder
      • Undercomplete autoencoders
        • Example
        • Visualizing with TensorBoard
        • Visualizing reconstructed images
      • Multilayer autoencoders
        • Example
      • Convolutional autoencoders
        • Example
      • Sparse autoencoders
        • Example
      • Denoising autoencoders
        • Example
      • Contractive autoencoder
    • Variational Autoencoders
    • Training VAEs
      • Example
    • Summary
    • Further reading
  • Chapter 9: Deep Belief Networks
    • Overview of DBNs
      • BBNs
        • Predictive propagation
        • Retrospective propagation
      • RBMs
        • RBM training
        • Example – RBM recommender system
        • Example – RBM recommender system using code
    • DBN architecture
    • Training DBNs
    • Fine-tuning
    • Datasets and libraries
      • Example – supervised DBN classification
      • Example – supervised DBN regression
      • Example – unsupervised DBN classification
    • Summary
    • Further reading
  • Chapter 10: Reinforcement Learning
    • Basic definitions
    • Introducing Q-learning
      • Learning objectives
      • Policy optimization
      • Methods of Q-learning
    • Playing with OpenAI Gym
    • The frozen lake problem
    • Summary
  • Chapter 11: What's Next?
    • Summarizing the book
    • Future of machine learning
    • Artificial general intelligence
      • Ethics in AI
      • Interpretability
      • Automation
      • AI safety
      • AI ethics
      • Accountability
    • Conclusions
  • Other Books You May Enjoy
  • Index

Usage statistics

Total views: 0
In the last 30 days: 0