📝 Machine Learning Concepts

This page is a table of contents for my notes and reviews on Machine Learning topics: core concepts that are worth having in your toolkit when you plan to train your own model.
Each section is divided by topic so the information comes in bite-sized pieces that are easy to understand!
📌 Table of Contents
- Loss Functions
log loss; hinge loss; PolyLoss; focal loss; GE2E; MAE; MSE; AAM
- Activation Functions
sigmoid; hyperbolic tangent (tanh); ReLU
- Metrics
precision; F1; AUC
- Model Compression and Privacy
on-device inference; distillation; federated learning; differential privacy
- Ensemble Models
bootstrapping; boosting; bagging; gradient boosting; random forest
- Gradient Accumulation and Gradient Checkpointing
the process and advantages of gradient accumulation and checkpointing
- Reinforcement Learning from Human Feedback
reward model; bias in human feedback; fine-tuning LLMs
- Debugging ML Models
data issues; underfitting; overfitting; HPO
- ML Algorithms
classification algorithms; regression algorithms
- DL Algorithms
CNN; RNN; LSTM; GRU; Transformers
- Graph Neural Networks
message passing; benefits of GNNs; loss functions
- Token Sampling
greedy; top-k; top-p; beam search; exhaustive search; temperature
- The 'How-Tos' of Machine Learning
different techniques for common problems
- Large Language Model Ops
best practices; security; ethics
📸 Credits
The inline diagrams are taken from the resources listed in the reference section within each topic.
📝 Citation
If you found our work useful, please cite it as:
  @misc{Chadha2022DistilledNotes,
    author        = {Jain, Vinija and Chadha, Aman},
    title         = {Distilled Notes for Machine Learning},
    howpublished  = {\url{https://www.vinija.ai}},
    year          = {2022},
    note          = {Accessed: 2022-07-01},
    url           = {https://www.vinija.ai}
  }

A. Chadha and V. Jain, Distilled Notes for Machine Learning, https://www.vinija.ai, 2022, Accessed: July 1, 2022.