Financial University Electronic Library

Record Details

Klaas, Jannes. Machine learning for finance: the practical guide to using data-driven algorithms in banking, insurance, and investments / Jannes Klaas. — Birmingham: Packt Publishing, Limited, 2019. — 1 online resource (457 pages). — (Expert insight). — <URL:http://elib.fa.ru/ebsco/2149485.pdf>.

Record created: 22.06.2019

Subjects: Finance — Data processing; Finance — Mathematical models; Machine learning

Collections: EBSCO

Abstract

Machine Learning for Finance shows you how to build machine learning models for use in financial services organizations. It shows you how to work with all the key machine learning models, from simple regression to advanced neural networks. You will use machine learning to automate manual tasks, address systematic bias, and find new insights ...

Usage rights

Access location                     User group        Actions
Financial University local network  All               Read, Print, Download
Internet                            Readers           Read, Print
Internet                            Anonymous users   (none)

Table of Contents

  • Cover
  • Copyright
  • Mapt upsell
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Neural Networks and Gradient-Based Optimization
    • Our journey in this book
    • What is machine learning?
    • Supervised learning
    • Unsupervised learning
    • Reinforcement learning
      • The unreasonable effectiveness of data
      • All models are wrong
    • Setting up your workspace
    • Using Kaggle kernels
      • Running notebooks locally
        • Installing TensorFlow
        • Installing Keras
        • Using data locally
    • Using the AWS deep learning AMI
    • Approximating functions
    • A forward pass
    • A logistic regressor
      • Python version of our logistic regressor
    • Optimizing model parameters
    • Measuring model loss
      • Gradient descent
      • Backpropagation
      • Parameter updates
      • Putting it all together
    • A deeper network
    • A brief introduction to Keras
      • Importing Keras
      • A two-layer model in Keras
        • Stacking layers
        • Compiling the model
        • Training the model
      • Keras and TensorFlow
    • Tensors and the computational graph
    • Exercises
    • Summary
  • Chapter 2: Applying Machine Learning to Structured Data
    • The data
    • Heuristic, feature-based, and E2E models
    • The machine learning software stack
    • The heuristic approach
      • Making predictions using the heuristic model
      • The F1 score
      • Evaluating with a confusion matrix
    • The feature engineering approach
      • A feature from intuition – fraudsters don't sleep
      • Expert insight – transfer, then cash out
      • Statistical quirks – errors in balances
    • Preparing the data for the Keras library
      • One-hot encoding
      • Entity embeddings
        • Tokenizing categories
        • Creating input models
        • Training the model
    • Creating predictive models with Keras
      • Extracting the target
      • Creating a test set
      • Creating a validation set
      • Oversampling the training data
      • Building the model
        • Creating a simple baseline
        • Building more complex models
    • A brief primer on tree-based methods
      • A simple decision tree
      • A random forest
      • XGBoost
    • E2E modeling
    • Exercises
    • Summary
  • Chapter 3: Utilizing Computer Vision
    • Convolutional Neural Networks
      • Filters on MNIST
      • Adding a second filter
    • Filters on color images
    • The building blocks of ConvNets in Keras
      • Conv2D
        • Kernel size
        • Stride size
        • Padding
        • Input shape
        • Simplified Conv2D notation
        • ReLU activation
      • MaxPooling2D
      • Flatten
      • Dense
      • Training MNIST
        • The model
        • Loading the data
        • Compiling and training
    • More bells and whistles for our neural network
      • Momentum
      • The Adam optimizer
      • Regularization
        • L2 regularization
        • L1 regularization
        • Regularization in Keras
      • Dropout
      • Batchnorm
    • Working with big image datasets
    • Working with pretrained models
      • Modifying VGG-16
      • Random image augmentation
        • Augmentation with ImageDataGenerator
    • The modularity tradeoff
    • Computer vision beyond classification
      • Facial recognition
      • Bounding box prediction
    • Exercises
    • Summary
  • Chapter 4: Understanding Time Series
    • Visualization and preparation in pandas
      • Aggregate global feature statistics
      • Examining the sample time series
      • Different kinds of stationarity
      • Why stationarity matters
      • Making a time series stationary
      • When to ignore stationarity issues
    • Fast Fourier transformations
    • Autocorrelation
    • Establishing a training and testing regime
    • A note on backtesting
    • Median forecasting
    • ARIMA
    • Kalman filters
    • Forecasting with neural networks
      • Data preparation
        • Weekdays
    • Conv1D
    • Dilated and causal convolution
    • Simple RNN
    • LSTM
      • The carry
    • Recurrent dropout
    • Bayesian deep learning
    • Exercises
    • Summary
  • Chapter 5: Parsing Textual Data with Natural Language Processing
    • An introductory guide to spaCy
    • Named entity recognition
      • Fine-tuning the NER
    • Part-of-speech (POS) tagging
    • Rule-based matching
      • Adding custom functions to matchers
      • Adding the matcher to the pipeline
      • Combining rule-based and learning-based systems
    • Regular expressions
      • Using Python's regex module
      • Regex in pandas
      • When to use regexes and when not to
    • A text classification task
    • Preparing the data
      • Sanitizing characters
      • Lemmatization
      • Preparing the target
      • Preparing the training and test sets
    • Bag-of-words
      • TF-IDF
    • Topic modeling
    • Word embeddings
      • Preprocessing for training with word vectors
      • Loading pretrained word vectors
      • Time series models with word vectors
    • Document similarity with word embeddings
    • A quick tour of the Keras functional API
    • Attention
    • Seq2seq models
      • Seq2seq architecture overview
      • The data
      • Encoding characters
      • Creating inference models
      • Making translations
    • Exercises
    • Summary
  • Chapter 6: Using Generative Models
    • Understanding autoencoders
      • Autoencoder for MNIST
      • Autoencoder for credit cards
    • Visualizing latent spaces with t-SNE
    • Variational autoencoders
      • MNIST example
      • Using the Lambda layer
      • Kullback–Leibler divergence
      • Creating a custom loss
      • Using a VAE to generate data
      • VAEs for an end-to-end fraud detection system
    • VAEs for time series
    • GANs
      • An MNIST GAN
      • Understanding GAN latent vectors
      • GAN training tricks
    • Using less data – active learning
      • Using labeling budgets efficiently
      • Leveraging machines for human labeling
      • Pseudo labeling for unlabeled data
      • Using generative models
    • SGANs for fraud detection
    • Exercises
    • Summary
  • Chapter 7: Reinforcement Learning for Financial Markets
    • Catch – a quick guide to reinforcement learning
      • Q-learning turns RL into supervised learning
      • Defining the Q-learning model
      • Training to play Catch
    • Markov processes and the Bellman equation – A more formal introduction to RL
      • The Bellman equation in economics
    • Advantage actor-critic models
      • Learning to balance
      • Learning to trade
    • Evolutionary strategies and genetic algorithms
    • Practical tips for RL engineering
      • Designing good reward functions
        • Careful, manual reward shaping
        • Inverse reinforcement learning
        • Learning from human preferences
      • Robust RL
    • Frontiers of RL
      • Multi-agent RL
      • Learning how to learn
      • Understanding the brain through RL
    • Exercises
    • Summary
  • Chapter 8: Privacy, Debugging, and Launching Your Products
    • Debugging data
      • How to find out whether your data is up to the task
      • What to do if you don't have enough data
      • Unit testing data
      • Keeping data private and complying with regulations
      • Preparing the data for training
      • Understanding which inputs led to which predictions
    • Debugging your model
      • Hyperparameter search with Hyperas
      • Efficient learning rate search
      • Learning rate scheduling
      • Monitoring training with TensorBoard
      • Exploding and vanishing gradients
    • Deployment
      • Launching fast
      • Understanding and monitoring metrics
      • Understanding where your data comes from
    • Performance tips
      • Using the right hardware for your problem
      • Making use of distributed training with TF estimators
      • Using optimized layers such as CuDNNLSTM
      • Optimizing your pipeline
      • Speeding up your code with Cython
      • Caching frequent requests
    • Exercises
    • Summary
  • Chapter 9: Fighting Bias
    • Sources of unfairness in machine learning
    • Legal perspectives
    • Observational fairness
    • Training to be fair
    • Causal learning
      • Obtaining causal models
      • Instrument variables
      • Non-linear causal models
    • Interpreting models to ensure fairness
    • Unfairness as complex system failure
      • Complex systems are intrinsically hazardous systems
        • Catastrophes are caused by multiple failures
        • Complex systems run in degraded mode
        • Human operators both cause and prevent accidents
        • Accident-free operation requires experience with failure
    • A checklist for developing fair models
      • What is the goal of the model developers?
        • Is the data biased?
        • Are errors biased?
        • How is feedback incorporated?
        • Can the model be interpreted?
        • What happens to models after deployment?
    • Exercises
    • Summary
  • Chapter 10: Bayesian Inference and Probabilistic Programming
    • An intuitive guide to Bayesian inference
      • Flat prior
      • <50% prior
      • Prior and posterior
      • Markov Chain Monte Carlo
      • Metropolis-Hastings MCMC
      • From probabilistic programming to deep probabilistic programming
    • Summary
    • Farewell
    • Further reading
      • General data analysis
      • Sound science in machine learning
      • General machine learning
      • General deep learning
      • Reinforcement learning
      • Bayesian machine learning
  • Other Books You May Enjoy
  • Index
