Pioneering AI: Navigating the 2025 AI Learning Journey
Are you eager to dive into the world of Artificial Intelligence (AI) but unsure where to begin? Fear not!
This comprehensive guide will equip you with the essential knowledge and resources to navigate the ever-evolving AI landscape in 2025 and beyond.
Mathematics
Laying a solid mathematical foundation is crucial for understanding the core concepts of AI. Focus on mastering the following areas:
- Linear Algebra
- Calculus
- Probability and Statistics
- Optimization Techniques
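To see why these areas matter, consider the gradient descent update that drives nearly all model training. It combines calculus (the gradient), linear algebra (a vector of parameters), and optimization in a single line, where θ is the parameter vector, η the learning rate, and L the loss:

```latex
\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)
```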
Tools: Python
Python is the go-to programming language for AI and Machine Learning. Familiarize yourself with Python’s syntax, data structures, and libraries like NumPy, Pandas, and Matplotlib.
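If you want a quick taste of how these three libraries fit together, here's a minimal sketch (the data is made up purely for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Generate some noisy linear data with NumPy
rng = np.random.default_rng(seed=42)
x = np.linspace(0, 10, 100)
y = 2.5 * x + rng.normal(0, 2, size=x.shape)

# Wrap it in a Pandas DataFrame for inspection
df = pd.DataFrame({"x": x, "y": y})
print(df.describe())

# Plot it with Matplotlib
plt.scatter(df["x"], df["y"], s=10, label="data")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```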
Beginners start here: Practical Python Programming.
If you're already comfortable with Python, take Advanced Python Mastery instead.
Both are great courses by David Beazley, author of the Python Cookbook.
After that, watch some of James Powell’s talks
Read Python Design Patterns.
Supplementary
- Book: Fluent Python, 2nd Edition (code)
- Podcasts: Real Python & Talk Python
PyTorch
PyTorch is a powerful open-source machine learning library used for building and deploying AI models. Learn how to use PyTorch for building neural networks and deploying models.
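To give a flavor of what that looks like, here's a minimal sketch of defining and training a tiny network in PyTorch; the architecture and data below are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# A tiny feed-forward network for illustration
class TinyNet(nn.Module):
    def __init__(self, in_dim=4, hidden=16, out_dim=3):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

model = TinyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random data (dimensions are arbitrary)
x = torch.randn(8, 4)            # batch of 8 samples, 4 features
y = torch.randint(0, 3, (8,))    # 3 classes
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```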
Watch PyTorch Tutorials by Aladdin Persson
The PyTorch website is a great place to be.
Test your knowledge with some puzzles
Machine Learning: Write from Scratch
Implement machine learning algorithms from scratch to gain a deeper understanding of how they work. Start with simple algorithms like linear regression and logistic regression, then move on to more advanced techniques like decision trees and random forests.
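For instance, a from-scratch linear regression fits in a few lines of NumPy. This is a minimal sketch using batch gradient descent on synthetic data:

```python
import numpy as np

# Linear regression via batch gradient descent, nothing beyond NumPy
def fit_linear_regression(X, y, lr=0.01, epochs=1000):
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        y_pred = X @ w + b
        error = y_pred - y
        # Gradients of the mean squared error
        grad_w = (2 / n) * X.T @ error
        grad_b = (2 / n) * error.sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Sanity check on synthetic data: should recover w ≈ [3], b ≈ 1
X = np.random.rand(200, 1)
y = 3 * X[:, 0] + 1 + 0.05 * np.random.randn(200)
w, b = fit_linear_regression(X, y)
print(w, b)
```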
Read The Hundred-Page Machine Learning Book by Andriy Burkov.
Write from Scratch
While you’re reading, write the algorithms from scratch.
Compare your code against existing open-source from-scratch implementations on GitHub.
If you want a challenge, write PyTorch from scratch by following this course.
Compete
Participate in machine learning competitions on platforms like Kaggle to test your skills and learn from the community. Kaggle offers a wide range of datasets and challenges to help you hone your skills.
Apply what you learn in competitions.
- Join ML competitions on platforms like bitgrit and Kaggle; find more in this article.
- Look at past winning solutions and study them.
Do side projects
Work on side projects that interest you and apply machine learning techniques to solve real-world problems. This will help you gain practical experience and build a portfolio of projects to showcase your skills.
Read Getting machine learning to production by Vicki Boykis
She also wrote about what she learned building Viberary, a semantic search for books.
Get a dataset and build a model (e.g., use earthaccess to get NASA Earth data).
Create a UI with Streamlit and share it on Twitter.
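As a rough sketch of what such a Streamlit app can look like (the model here is a stand-in; plug in your own):

```python
# app.py — run with: streamlit run app.py
import streamlit as st
import numpy as np

st.title("Toy model demo")

# Collect inputs from the user
temp = st.slider("Temperature (°C)", -10.0, 40.0, 20.0)
humidity = st.slider("Humidity (%)", 0.0, 100.0, 50.0)

# Stand-in for a real trained model: any object with a predict() would do
def predict(features: np.ndarray) -> float:
    return float(0.1 * features[0] + 0.02 * features[1])

if st.button("Predict"):
    st.write("Prediction:", predict(np.array([temp, humidity])))
```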
Deploy them
Learn how to deploy your machine learning models in production using tools like Flask, Django, or FastAPI. This will give you a better understanding of the entire machine learning lifecycle.
Get the models in production. Track your experiments. Learn how to monitor models. Experience data and model drift firsthand.
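To make that concrete, here's a minimal FastAPI sketch of a prediction endpoint; the predict function is a placeholder for a real model you'd load at startup:

```python
# serve.py — a minimal FastAPI model server; run with: uvicorn serve:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

# Placeholder for a real model, e.g. loaded via joblib or torch.load
def predict(values: list[float]) -> float:
    return sum(values) / len(values)

@app.post("/predict")
def predict_endpoint(features: Features) -> dict:
    return {"prediction": predict(features.values)}
```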
Here are some excellent resources:
- Made With ML
- DataTalksClub/mlops-zoomcamp: Free MLOps course
- chiphuyen/machine-learning-systems-design
- Evidently AI — ML system design: 300 case studies
- stas00/ml-engineering: Machine Learning Engineering Online Book
Supplementary
Read books like “Pattern Recognition and Machine Learning” by Christopher Bishop and “Machine Learning” by Tom Mitchell to deepen your understanding of machine learning concepts.
Deep Learning: Fast.ai
Fast.ai offers a practical and code-first approach to deep learning. Their courses are designed to help you build working deep learning models quickly, even if you don’t have a strong mathematical background.
- fast.ai (part1, part2) + W&B Study Group
Liked fast.ai? Check out Full Stack Deep Learning.
If you want a more comprehensive, traditional course, check out UNIGE 14×050 — Deep Learning by François Fleuret.
If you need to reach for theory at some point, these are great books:
- Dive into Deep Learning (has code examples in PyTorch, NumPy/MXNet, JAX, and TensorFlow)
- Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
- Neural Networks and Deep Learning by Michael Nielsen
- Understanding Deep Learning (with hands-on notebooks)
Read The Little Book of Deep Learning on your phone instead of scrolling Twitter.
Read these while your neural networks are converging.
Do more competitions
Continue participating in deep learning competitions on Kaggle and other platforms to stay sharp and learn from the community.
- PlantTraits2024 — FGVC11 | Kaggle (computer vision)
Implement papers
Implement cutting-edge deep learning papers from arXiv or published in conferences like CVPR, ICCV, ECCV, NeurIPS, ICML, and ICLR. This will help you stay up-to-date with the latest research and gain practical experience in implementing state-of-the-art models.
Check out labml.ai Annotated PyTorch Paper Implementations
Papers with Code is a great resource; here’s BERT explained on their website.
Below are some resources for the specializations within deep learning:
Computer Vision
Learn about convolutional neural networks (CNNs) and their applications in computer vision tasks like image classification, object detection, and segmentation. Implement popular CNN architectures like AlexNet, VGGNet, ResNet, and Inception.
A lot of people recommend CS231n: Deep Learning for Computer Vision. It’s challenging but worth it if you get through it.
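For a feel of the building blocks, here's a minimal CNN sketch in PyTorch (layer sizes are arbitrary and this is not any of the named architectures):

```python
import torch
import torch.nn as nn

# A compact CNN in the spirit of the classic architectures above
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Shape check on a fake batch of 32x32 RGB images
out = SmallCNN()(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])
```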
Reinforcement Learning
For RL, two widely recommended starting points are OpenAI's Spinning Up in Deep RL and Hugging Face's Deep Reinforcement Learning Course.
NLP
Learn about natural language processing (NLP) and how to apply deep learning techniques to tasks like text classification, sentiment analysis, named entity recognition, and machine translation. Implement models like RNNs, LSTMs, GRUs, and Transformers.
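As one small example of these building blocks, here's a minimal sketch of an LSTM text classifier in PyTorch; the vocabulary size and dimensions are placeholders:

```python
import torch
import torch.nn as nn

# A minimal LSTM text classifier: token ids -> embedding -> LSTM -> class logits
class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, embed_dim)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden)
        return self.head(h_n[-1])          # logits from the last hidden state

logits = LSTMClassifier()(torch.randint(0, 10_000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```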
Another great Stanford course, CS 224N | Natural Language Processing with Deep Learning
Learn Hugging Face: Hugging Face NLP Course
Check out this Super Duper NLP Repo
Good articles and breakdowns
- BERT Research — Ep. 1 — Key Concepts & Sources · Chris McCormick
- The Illustrated Word2vec — Jay Alammar
- The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
- Understanding LSTM Networks — colah’s blog
- PyTorch RNN from Scratch — Jake Tae
Large Language Models
First, watch [1hr Talk] Intro to Large Language Models by Andrej Karpathy.
Then Large Language Models in Five Formulas, by Alexander Rush — Cornell Tech

Watch Neural Networks: Zero to Hero
Watch the "Neural Networks: Zero to Hero" video series by Andrej Karpathy to gain an intuitive, code-level understanding of how neural networks work.
It starts with explaining and coding backpropagation from scratch and ends with writing GPT from scratch.
Neural Networks: Zero To Hero by Andrej Karpathy
He just released a new video → Let’s build the GPT Tokenizer
You can also look at GPT in 60 Lines of NumPy | Jay Mody while you’re at it.
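If you want a taste of the backpropagation-from-scratch part before watching, here's a minimal sketch: chain-rule gradients for a single neuron, checked against finite differences:

```python
# Manual backpropagation through a single neuron:
# forward pass, then chain-rule gradients, verified numerically.
w, b, x, y = 0.5, 0.1, 2.0, 1.0

def loss(w, b):
    y_pred = w * x + b
    return (y_pred - y) ** 2

# Backward pass by hand: dL/dy_pred = 2*(y_pred - y), then the chain rule
y_pred = w * x + b
dL_dypred = 2 * (y_pred - y)
dL_dw = dL_dypred * x
dL_db = dL_dypred * 1.0

# Numerical check with finite differences
eps = 1e-6
num_dw = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
num_db = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
print(dL_dw, num_dw)  # should match closely
print(dL_db, num_db)
```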
Free LLM boot camp
Full Stack Deep Learning has released its paid LLM Bootcamp for free. It covers the fundamentals of large language models and how to build applications using them.
It teaches prompt engineering, LLMOps, UX for LLMs, and how to launch an LLM app in an hour.
Now that you’re itching to build after this boot camp,
Build with LLMs
Build applications using pre-trained language models like GPT-3, BERT, and T5. Fine-tune these models on your own datasets to solve specific problems.
Want to build apps with LLMs?
Watch Application Development using Large Language Models by Andrew Ng.
Read Building LLM applications for production by Chip Huyen.
As well as Patterns for Building LLM-based Systems & Products by Eugene Yan
Refer to the OpenAI Cookbook for recipes.
Use Vercel AI templates to get started.
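As a minimal sketch of calling a hosted LLM from Python with the OpenAI SDK (the model name here is an assumption; substitute whatever you have access to):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: pick any model you have access to
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Summarize what a vector database does in two sentences."},
    ],
)
print(response.choices[0].message.content)
```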
Participate in hackathons
Participate in hackathons and challenges focused on building applications using large language models. This will help you learn from others and gain practical experience.
lablab.ai has new AI hackathons every week. Let me know if you want to team up!
If you want to go deeper into the theory and understand how everything works:
Read papers
Read research papers on large language models published in conferences like ACL, EMNLP, and NAACL. Focus on understanding the key ideas and how they can be applied in practice.
A great article by Sebastian Raschka on Understanding Large Language Models, where he lists papers you should read.
He also recently published another article with papers you should read in January 2024, covering Mistral models.
Follow his substack Ahead of AI.
Write Transformers from scratch
Implement Transformer models from scratch using PyTorch or TensorFlow. This will give you a deeper understanding of how they work and how to customize them for your specific needs.
Read The Transformer Family Version 2.0 | Lil’Log for an overview.
Choose whichever format suits you best and implement it from scratch.
Paper
- Attention Is All You Need
- The Illustrated Transformer
- The Annotated Transformer by Harvard
- Thinking Like Transformers
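As a concrete starting point, here's a minimal sketch of the paper's core operation: single-head scaled dot-product self-attention in PyTorch (no masking, no multi-head projection):

```python
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (no masking)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # project inputs
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)       # attention distribution
    return weights @ v                            # weighted sum of values

d_model = 16
x = torch.randn(1, 5, d_model)                    # (batch, seq_len, d_model)
w_q = torch.randn(d_model, d_model)
w_k = torch.randn(d_model, d_model)
w_v = torch.randn(d_model, d_model)
print(self_attention(x, w_q, w_k, w_v).shape)     # torch.Size([1, 5, 16])
```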
Some good blogs
Follow blogs like The Gradient, Lil’Log, and Distill.pub to stay up-to-date with the latest research and trends in AI and machine learning.
- Creating a Transformer From Scratch — Part One: The Attention Mechanism (part 2) (code)
- Understanding and Coding the Self-Attention Mechanism of Large Language Models From Scratch by Sebastian Raschka, PhD
- Transformers from scratch
Videos
- Coding a Transformer from scratch on PyTorch, with full explanation, training and inference
- NLP: Implementing BERT and Transformers from Scratch
You can code transformers from scratch now. But there’s still more.
Watch these Stanford CS25 — Transformers United videos.
More good blogs
- Gradient Descent into Madness — Building an LLM from scratch
- The Illustrated Transformer — Jay Alammar
- Some Intuition on Attention and the Transformer by Eugene Yan
- Speeding up the GPT — KV cache | Becoming The Unbeatable
- Beyond Self-Attention: How a Small Language Model Predicts the Next Token
- Llama from scratch (or how to implement a paper without crying) | Brian Kitano
- Improving LoRA: Implementing Weight-Decomposed Low-Rank Adaptation (DoRA) from Scratch
Watch Umar Jamil
Watch Umar Jamil's video series on YouTube to learn about advanced topics in deep learning and large language models. He has fantastic in-depth videos explaining papers, and he shows you the code too.
- LoRA: Low-Rank Adaptation of Large Language Models — Explained visually + PyTorch code from scratch
- Mistral / Mixtral Explained: Sliding Window Attention, Sparse Mixture of Experts, Rolling Buffer
- Attention is all you need (Transformer) — Model explanation (including math), Inference and Training
- LLaMA explained: KV-Cache, Rotary Positional Embedding, RMS Norm, Grouped Query Attention, SwiGLU
- Retrieval Augmented Generation (RAG) Explained: Embedding, Sentence BERT, Vector Database (HNSW)
Learn how to run open-source models
Learn how to run open-source models like GPT-2, BERT, T5, and Llama using tools like Hugging Face Transformers. This will give you a better understanding of how these models work and how to use them in practice.
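As a minimal sketch of running an open model locally with the Transformers pipeline API (GPT-2 is chosen here only because it's small; swap in any causal LM you can fit):

```python
from transformers import pipeline

# Download a small open model from the Hugging Face Hub and generate text
generator = pipeline("text-generation", model="gpt2")
out = generator("The key idea behind attention is", max_new_tokens=40)
print(out[0]["generated_text"])
```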
The links below are far from exhaustive; look at the LLM Syllabus for a more comprehensive syllabus for LLMs.
Prompt Engineering
Learn about prompt engineering, which is the art of crafting effective prompts to get the desired output from large language models. This is a crucial skill for building applications using LLMs.
Read Prompt Engineering | Lil’Log
ChatGPT Prompt Engineering for Developers by Isa Fulford (OpenAI) and Andrew Ng
DeepLearning.ai also has other short courses you can enroll in for free.
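To make the idea concrete, here's a minimal sketch of a few-shot prompt template; the task and examples are made up for illustration:

```python
# A simple few-shot prompt template: show the model the pattern, then ask.
FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: "The battery died after a week."
Sentiment: negative

Review: "Crisp screen and great speakers."
Sentiment: positive

Review: "{review}"
Sentiment:"""

prompt = FEW_SHOT_PROMPT.format(review="Setup took hours and support never replied.")
print(prompt)  # send this string to any LLM chat/completion API
```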
Fine-tuning LLMs
Learn how to fine-tune large language models on your own datasets to solve specific problems. This involves training the model on your data and adapting it to your specific use case.
Read the Hugging Face fine-tuning guide.
A good guidebook: Fine-Tuning — The GenAI Guidebook
Check out axolotl.
This is a good article: Fine-tune a Mistral-7b model with Direct Preference Optimization | by Maxime Labonne
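As a rough sketch of what fine-tuning with the Hugging Face Trainer looks like (the dataset and model choices are assumptions, and it trains on a small subset just so it runs quickly):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Fine-tune a small encoder on a sentiment dataset
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```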
RAG
Learn about Retrieval Augmented Generation (RAG), which is a technique for combining large language models with information retrieval systems to improve their performance on specific tasks.
A great article by Anyscale: Building RAG-based LLM Applications for Production
A comprehensive overview of Retrieval Augmented Generation by Aman Chadha
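As a minimal sketch of the retrieval half of RAG, using sentence-transformers for embeddings (the model choice and documents are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

# 1. Embed a small document store
model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "RAG augments an LLM with documents retrieved at query time.",
    "KV caching speeds up autoregressive decoding.",
    "LoRA fine-tunes large models with low-rank adapters.",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

# 2. Retrieve the most relevant document for a query
query = "How do I give an LLM access to my own data?"
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best_doc = docs[int(scores.argmax())]

# 3. Stuff the retrieved context into the prompt for any LLM
prompt = f"Answer using this context:\n{best_doc}\n\nQuestion: {query}"
print(prompt)
```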
Remember, learning AI is a journey, not a destination. Stay curious, experiment, and don't be afraid to make mistakes. By following this guide and continuously learning, you'll be well on your way to becoming an AI trailblazer in 2025 and beyond.
