DeepFindr
  • 57 videos
  • 1,631,756 views
Uniform Manifold Approximation and Projection (UMAP) | Dimensionality Reduction Techniques (5/5)
▬▬ Papers / Resources ▬▬▬
Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing
Sources:
- TDA Introduction: www.frontiersin.org/articles/10.3389/frai.2021.667963/full
- TDA Blogpost: chance.amstat.org/2021/04/topological-data-analysis/
- TDA Applications Blogpost: orbyta.it/tda-in-a-nutshell-how-can-we-find-multidimensional-voids-and-explore-the-black-boxes-of-deep-learning/
- TDA Intro Paper: arxiv.org/pdf/2006.03173.pdf
- Mathematical UMAP Blogpost: topos.site/blog/2024-04-05-understanding-umap/
- UMAP Author Talk: ruclips.net/video/nq6iPZVUxZU/видео.html&ab_channel=Enthought
- UMAP vs. t-SNE Global preservation paper: dkobak.github.io/pdfs/kobak2021initi...
3,293 views

Videos

t-distributed Stochastic Neighbor Embedding (t-SNE) | Dimensionality Reduction Techniques (4/5)
5K views · 6 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/DeepFindr. The first 200 of you will get 20% off Brilliant’s annual premium subscription. (Video sponsored by Brilliant.org) ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Entropy: gregorygundersen.com/blog/2020/09/01/gaussian-entropy/ At...
Multidimensional Scaling (MDS) | Dimensionality Reduction Techniques (3/5)
4.6K views · 7 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/DeepFindr. The first 200 of you will get 20% off Brilliant’s annual premium subscription. ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Kruskal Paper 1964: cda.psych.uiuc.edu/psychometrika_highly_cited_articles/kruskal_1964a.pdf Very old...
Principal Component Analysis (PCA) | Dimensionality Reduction Techniques (2/5)
5K views · 9 months ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Peter Bloem PCA Blog: peterbloem.nl/blog/pca PCA for DS book: pca4ds.github.io/basic.html PCA Book: cda.psych.uiuc.edu/statistical_learning_course/Jolliffe I. Principal Component Analysis (2ed., Springer, 2002)(518s)_MVsa_.pdf Lagrange Multipliers: ekamperi.github.io/mathemati...
Dimensionality Reduction Techniques | Introduction and Manifold Learning (1/5)
12K views · 9 months ago
Brilliant 20% off: brilliant.org/DeepFindr/ ▬▬ Papers / Resources ▬▬▬ Intro to Dim. Reduction Paper: drops.dagstuhl.de/opus/volltexte/2012/3747/pdf/12.pdf T-SNE Visualization Video: ruclips.net/video/wvsE8jm1GzE/видео.html&ab_channel=GoogleforDevelopers On the Surprising Behavior of Distance Metrics in High Dimensional Space: link.springer.com/chapter/10.1007/3-540-44503-X_27 On the Intrinsic D...
LoRA explained (and a bit about precision and quantization)
57K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ LoRA Paper: arxiv.org/abs/2106.09685 QLoRA Paper: arxiv.org/abs/2305.14314 Huggingface 8bit intro: huggingface.co/blog/hf-bitsandbytes-integration PEFT / LoRA Tutorial: www.philschmid.de/fine-tune-flan-t5-peft Adapter Layers: arxiv.org/pdf/1902.00751.pdf Prefix Tuning: arxiv.org/abs/2101.00190 ▬▬ Support me if you like 🌟 ►Link to this channel: bit.ly/3zEqL1W ►Support m...
Vision Transformer Quick Guide - Theory and Code in (almost) 15 min
74K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1P9TPRWsDdqJC6IvOxjG2_3QlgCt59P0w?usp=sharing ViT paper: arxiv.org/abs/2010.11929 Best Transformer intro: jalammar.github.io/illustrated-transformer/ CNNs vs ViT: arxiv.org/abs/2108.08810 CNNs vs ViT Blog: towardsdatascience.com/do-vision-transformers-see-like-convolutional-neural-networks-paper-explained-91b4bd5185c8 Swi...
Personalized Image Generation (using Dreambooth) explained!
8K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1QUjLK6oUB_F4FsIDYusaHx-Yl7mL-Lae?usp=sharing Stable Diffusion Tutorial: jalammar.github.io/illustrated-stable-diffusion/ Stable Diffusion Paper: arxiv.org/abs/2112.10752 Hypernet Blogpost: blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac Dreambooth Paper: arxiv.org/abs/2208.12242 LoRa Paper: arxiv.o...
Equivariant Neural Networks | Part 3/3 - Transformers and GNNs
6K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ SchNet: arxiv.org/abs/1706.08566 SE(3) Transformer: arxiv.org/abs/2006.10503 Tensor Field Network: arxiv.org/abs/1802.08219 Spherical Harmonics RUclips Video: ruclips.net/video/EcKgJhFdtEY/видео.html&ab_channel=BJBodner Spherical Harmonics Formula: ruclips.net/video/5PMqf3Hj-Aw/видео.html&ab_channel=ProfessorMdoesScience Tensor Field Network Jupyter Notebook: github.co...
Equivariant Neural Networks | Part 2/3 - Generalized CNNs
5K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ Group Equivariant CNNs: arxiv.org/abs/1602.07576 Convolution 3B1B video: ruclips.net/video/KuXjwB4LzSA/видео.html&ab_channel=3Blue1Brown Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Steerable CNNs: arxiv.org/abs/1612.08498 Blogpost GCNN: medium.com/swlh/geometric-deep-learning-group-equivariant-convolutional-networks-ec687c7a7b41 Roto-Translatio...
Equivariant Neural Networks | Part 1/3 - Introduction
11K views · 1 year ago
▬▬ Papers / Resources ▬▬▬ Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Deep Learning for Molecules: dmol.pub/dl/Equivariant.html Naturally Occurring Equivariance: distill.pub/2020/circuits/equivariance/ 3Blue1Brown Group Theory: ruclips.net/video/mH0oCDa74tE/видео.html&ab_channel=3Blue1Brown Group Equivariant CNNs: arxiv.org/abs/1602.07576 Equivariance vs Data Augmentatio...
State of AI 2022 - My Highlights
2.8K views · 1 year ago
▬▬ Sources ▬▬▬▬▬▬▬ - State of AI Report 2022: www.stateof.ai/ ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Icons are from flaticon: www.flaticon.com/authors/freepik ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬ Music from Uppbeat (free for Creators!): uppbeat.io/t/sensho/forgiveness License code: AG34GTPX2CW8CTHS ▬▬ Used Videos ▬▬▬▬▬▬▬▬▬▬▬ Byron Bhxr: www.pexels.com/de-de/video/wissenschaft-animation-dna-biochemie-11268031/ ▬▬ Ti...
Contrastive Learning in PyTorch - Part 2: CL on Point Clouds
17K views · 1 year ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Colab Notebook: colab.research.google.com/drive/1oO-Raqge8oGXGNkZQOYTH-je4Xi1SFVI?usp=sharing - SimCLRv2: arxiv.org/pdf/2006.10029.pdf - PointNet: arxiv.org/pdf/1612.00593.pdf - PointNet++: arxiv.org/pdf/1706.02413.pdf - EdgeConv: arxiv.org/pdf/1801.07829.pdf - Contrastive Learning Survey: arxiv.org/ftp/arxiv/papers/2010/2010.05113.pdf ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Ico...
Contrastive Learning in PyTorch - Part 1: Introduction
31K views · 1 year ago
▬▬ Notes ▬▬▬▬▬▬▬▬▬▬▬ Two small things I realized when editing this video: SimCLR uses two separate augmented views as positive samples, and many frameworks have separate projection heads on the learned representations, which transform them additionally for the contrastive loss. ▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Intro: sthalles.github.io/a-few-words-on-representation-learning/ - Survey: arxiv.org/ftp/arx...
Self-/Unsupervised GNN Training
18K views · 2 years ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Molecular Pre-Training Evaluation: arxiv.org/pdf/2207.06010.pdf - Latent Space Image: arxiv.org/pdf/2206.08005.pdf - Survey Xie et al.: arxiv.org/pdf/2102.10757.pdf - Survey Liu et al.: arxiv.org/pdf/2103.00111.pdf - Graph Autoencoder, Kipf/Welling: arxiv.org/pdf/1611.07308.pdf - GraphCL: arxiv.org/pdf/2010.13902.pdf - Deep Graph Infomax: arxiv.org/pdf/1809.10341.pdf...
Diffusion models from scratch in PyTorch
248K views · 2 years ago
Causality and (Graph) Neural Networks
17K views · 2 years ago
How to get started with Data Science (Career tracks and advice)
1.6K views · 2 years ago
Converting a Tabular Dataset to a Temporal Graph Dataset for GNNs
12K views · 2 years ago
Converting a Tabular Dataset to a Graph Dataset for GNNs
32K views · 2 years ago
How to handle Uncertainty in Deep Learning #2.2
3.2K views · 2 years ago
How to handle Uncertainty in Deep Learning #2.1
6K views · 2 years ago
How to handle Uncertainty in Deep Learning #1.2
3.9K views · 2 years ago
How to handle Uncertainty in Deep Learning #1.1
11K views · 2 years ago
Recommender Systems using Graph Neural Networks
22K views · 2 years ago
Fake News Detection using Graphs with Pytorch Geometric
15K views · 2 years ago
Fraud Detection with Graph Neural Networks
27K views · 2 years ago
Traffic Forecasting with Pytorch Geometric Temporal
24K views · 2 years ago
Friendly Introduction to Temporal Graph Neural Networks (and some Traffic Forecasting)
29K views · 2 years ago
Python Graph Neural Network Libraries (an Overview)
8K views · 2 years ago

Comments

  • @PaxonFrady
    @PaxonFrady 2 days ago

    Why would the attention adjacency matrix be symmetric? If the weight vector is learnable, then it matters in which order the two input vectors are concatenated. There doesn't seem to be any reason to enforce symmetry.
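
    A quick numerical check of this point: the GAT score e_ij = LeakyReLU(a^T [W h_i || W h_j]) is indeed not symmetric in general. A minimal NumPy sketch (an illustration, not code from the video; all names are made up):

        import numpy as np

        rng = np.random.default_rng(0)
        F_in, F_out = 4, 3                  # toy feature dimensions
        W = rng.normal(size=(F_out, F_in))  # shared weight matrix
        a = rng.normal(size=2 * F_out)      # learnable attention vector

        def leaky_relu(x, slope=0.2):
            return np.where(x > 0, x, slope * x)

        def score(h_i, h_j):
            # e_ij = LeakyReLU(a^T [W h_i || W h_j]); concatenation order matters
            return leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))

        h1, h2 = rng.normal(size=F_in), rng.normal(size=F_in)
        print(score(h1, h2), score(h2, h1))  # generally e_12 != e_21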

  • @anastassiya8526
    @anastassiya8526 7 days ago

    This was the best explanation, and it gave me hope of understanding these mechanisms. Everything was explained and depicted so well, thank you!

  • @eransasson20
    @eransasson20 10 days ago

    Thanks for this amazing presentation! This topic, which is not trivial, is also not easy to show in pictures, and you succeeded perfectly. A great help!

  • @PostmetaArchitect
    @PostmetaArchitect 10 days ago

    It's almost as if it's just a normal neural network, but projected onto a graph.

  • @English-bh1ng
    @English-bh1ng 15 days ago

    Well-organized video and description, abundant references. I love this series. Keep it up!

  • @stevechesney9334
    @stevechesney9334 16 days ago

    I really appreciate the information you shared in this video/playlist. Do you have an example where you used heterogeneous graph data to create a GNN or GCN?

  • @lw4423
    @lw4423 17 days ago

    mathematician reeee-man

  • @kenalexandremeridi
    @kenalexandremeridi 18 days ago

    What is this used for? I am intrigued (I'm a musician).

  • @tobiaspucher9597
    @tobiaspucher9597 20 days ago

    I'm having trouble fine-tuning the model. Has anyone managed to recreate the results from the paper?

  • @shubhamtalks9718
    @shubhamtalks9718 21 days ago

    Bro, you killed it. Best explanation. Trust me, I have watched all the tutorials, but the other explanations were shitty. Please create a video on quantization.

  • @scaredheart6109
    @scaredheart6109 21 days ago

    AMAZING!

  • @user-of2hd3bq4n
    @user-of2hd3bq4n 22 days ago

    This might be the best and simplest explanation of GAT one can ever find! Thanks, man!

  • @xxyyzz8464
    @xxyyzz8464 22 days ago

    Why use dropout with GELU? Didn't the GELU paper specifically say one motivation for GELU was to replace ReLU + dropout with a single GELU layer?

  • @mahathibodela
    @mahathibodela 23 days ago

    It's just awesome... the way you do research before making a video is really fascinating. Can you tell us how you collect the relevant papers? It would help a lot when we do projects on our own.

  • @amulya1284
    @amulya1284 24 days ago

    You make the best explanation videos ever! Is there one on how to train custom models using LoRA?

  • @RishabNakarmi-rn
    @RishabNakarmi-rn 26 days ago

    Didn't get it :( Moving on to other videos.

  • @SickegalAlien
    @SickegalAlien 26 days ago

    Banger vid from the big dog, as always 🐶

  • @SickegalAlien
    @SickegalAlien 26 days ago

    Collaborative filtering is such an underrated computational method, imho. Many real-life problems can be re-interpreted as recommendation problems. It's just a matter of perspective, but the results can be huge!

  • @newbie8051
    @newbie8051 26 days ago

    Ah, tough to understand; I guess I'll have to read more on this to fully understand it.

  • @nicksanders1438
    @nicksanders1438 28 days ago

    It's a good, concise walk-through with good code implementation examples. However, I'd recommend avoiding ambiguous variable names in the code, like betas, Block, etc.

  • @pokhrel3794
    @pokhrel3794 28 days ago

    The best explanation I've found on the internet.

  • @snsacharya1737
    @snsacharya1737 1 month ago

    A wonderful and succinct explanation with crisp visualisations about both the attention mechanism and the graph neural network. The way the learnable parameters are highlighted along with the intuition (such as a weighted adjacency matrix) and the corresponding matrix operations is very well done.

  • @redalligator291
    @redalligator291 1 month ago

    Hello, great video. I was wondering about something you mentioned: at 14:19 you say that graph VAEs might not be the best architecture for graph generation, and I was wondering what other models you would recommend that might be better than GVAEs. I am asking because I am working on exactly what you worked on, just for another disease, so any recommendations to make the process simpler would be very welcome.

    • @DeepFindr
      @DeepFindr 1 month ago

      Hi, I would look into transformer models like the SMILES Transformer, or into diffusion models. In both cases the architecture is more suitable, in my opinion.

    • @redalligator291
      @redalligator291 1 month ago

      @@DeepFindr OK, thank you so much. Do you know any specific SMILES Transformer or diffusion models that stand out to you as better than the rest? It would be really helpful to know which model I should build on.

  • @user-rd2gu3xg7e
    @user-rd2gu3xg7e 1 month ago

    Perfect video.

  • @rajeshve7211
    @rajeshve7211 1 month ago

    Fantastic explanation. You made it look easy!

  • @ycombinator765
    @ycombinator765 1 month ago

    bro is educated!

  • @vimukthisadithya6239
    @vimukthisadithya6239 1 month ago

    This is the best explanation of LoRA I have found so far!!!

  • @user-wg8rh7oh4b
    @user-wg8rh7oh4b 1 month ago

    What's the similarity between NTXentLoss (or InfoNCE) and SimCLR loss?
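
    On the relationship the question above asks about: SimCLR's loss is NT-Xent, which is itself an instance of the InfoNCE family, so the three largely coincide. A minimal PyTorch sketch of NT-Xent (an illustration under that reading, not code from the video):

        import torch
        import torch.nn.functional as F

        def nt_xent(z1, z2, tau=0.5):
            """NT-Xent (SimCLR): z1[i] and z2[i] are two augmented views of sample i."""
            z = F.normalize(torch.cat([z1, z2]), dim=1)  # 2N x d, unit norm
            sim = z @ z.t() / tau                        # temperature-scaled cosine similarities
            sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
            n = z1.size(0)
            pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
            return F.cross_entropy(sim, pos)             # InfoNCE-style softmax over all others

        loss = nt_xent(torch.randn(8, 16), torch.randn(8, 16))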

  • @paahunikhandelwal8974
    @paahunikhandelwal8974 1 month ago

    Can I predict the next 12 timesteps, but with more than one target feature, using this model?

  • @abhirampemmaraju6339
    @abhirampemmaraju6339 1 month ago

    Very good explanation

  • @DemianUsul
    @DemianUsul 1 month ago

    This video is pure gold. As someone coming from linguistics and trying to use MDS in research, I really appreciate it!

  • @user-kc7gm9ty5o
    @user-kc7gm9ty5o 1 month ago

    1:00

  • @kvnptl4400
    @kvnptl4400 1 month ago

    Highly appreciate the effort. I really like how you started with the theoretical part and then worked on a real dataset. Thanks!

  • @LuoxiangPan
    @LuoxiangPan 1 month ago

    The best GNN explanation video; it also answered many of my questions.

  • @Saed7630
    @Saed7630 1 month ago

    Bravo!

  • @poketopa1234
    @poketopa1234 2 months ago

    I think the LoRA update is scaled by the square root of the rank, not the rank.
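
    For reference: the original LoRA paper (arxiv.org/abs/2106.09685) scales the low-rank update by alpha / r; scaling by the square root of the rank is, to my knowledge, the later rsLoRA variant. A minimal sketch of the forward pass (illustrative only, with made-up shapes):

        import torch

        d, r, alpha = 64, 8, 16
        W = torch.randn(d, d)         # frozen pretrained weight
        A = torch.randn(r, d) * 0.01  # trainable low-rank factor
        B = torch.zeros(d, r)         # starts at zero, so the update starts at zero

        def lora_forward(x):
            # h = x W^T + (alpha / r) * x (B A)^T   (rsLoRA would use alpha / sqrt(r))
            return x @ W.t() + (alpha / r) * (x @ A.t() @ B.t())

        h = lora_forward(torch.randn(2, d))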

  • @nicolasf1219
    @nicolasf1219 2 months ago

    Would this also work on large graphs?

  • @vijayalaxmiise1504
    @vijayalaxmiise1504 2 months ago

    Very nice explanation, sir. GNNs are explained very clearly for beginners like me. Thank you so much!

  • @maxlchn1462
    @maxlchn1462 2 months ago

    Very well explained and implemented 👏

  • @nicolasf1219
    @nicolasf1219 2 months ago

    For some reason I get only 15 million parameters instead of 17 million. I double-checked everything. What could be the reason? Could it simply be the latest library versions I am using?

  • @alihaidershahhaider3061
    @alihaidershahhaider3061 2 months ago

    Ah, that's a nice one I found! Can you explain how GNNs can be linked to topological data analysis?

  • @datacuriosity
    @datacuriosity 2 months ago

    stellargraph is another good library, too.

  • @SambitTripathy
    @SambitTripathy 2 months ago

    After watching many LoRA videos, this one finally satisfied me. I have a question: in the fine-tuning code, they talk about merging LoRA adapters. What is that? Is it h += x @ (W_A @ W_B) * alpha? Can you mix and match adapters to improve the evaluation score?
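
    The guess in the comment is close: merging folds the low-rank product into the base weight once, so inference needs no extra matmuls afterwards. A self-contained sketch of the idea (assumed shapes and scaling, not the PEFT library's actual code):

        import torch

        d, r, alpha = 64, 8, 16
        W = torch.randn(d, d)  # frozen base weight
        A, B = torch.randn(r, d) * 0.01, torch.randn(d, r) * 0.01

        # Adapter applied at runtime: h = x W^T + (alpha / r) * x (B A)^T
        x = torch.randn(2, d)
        h_runtime = x @ W.t() + (alpha / r) * (x @ (B @ A).t())

        # Merging folds the same update into the weight once, before inference:
        W_merged = W + (alpha / r) * (B @ A)
        assert torch.allclose(x @ W_merged.t(), h_runtime, atol=1e-4)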

  • @keremkosif1348
    @keremkosif1348 2 months ago

    thanks

  • @adosar7261
    @adosar7261 2 months ago

    Isn't the embedding layer redundant? I mean, we then have the projection matrices, so embedding + projection is a composition of two linear layers.
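
    The algebra behind this question, for anyone following along: two stacked linear maps with no nonlinearity in between do collapse to a single linear map, though the pair can still act as a parameter-saving low-rank factorization when the middle dimension is small. A tiny check (illustrative shapes only):

        import torch

        x = torch.randn(5, 32)
        E = torch.randn(32, 16)  # "embedding" viewed as a linear map
        P = torch.randn(16, 8)   # projection

        # (x E) P == x (E P): the composition is one linear map with matrix E P
        assert torch.allclose((x @ E) @ P, x @ (E @ P), atol=1e-4)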

  • @lorenzoneri-co5hj
    @lorenzoneri-co5hj 2 months ago

    (Rome is bigger than NYC)

    • @DeepFindr
      @DeepFindr 2 months ago

      In terms of area, probably yes :P but not in terms of citizens.

  • @betabias
    @betabias 2 months ago

    Keep making content like this; I'm sure you will get very good recognition in the future. Thanks for such amazing content.

  • @deadliftform4920
    @deadliftform4920 2 months ago

    best

  • @alivecoding4995
    @alivecoding4995 2 months ago

    A great video, as usual. Very detailed and comprehensive 😊. Only one thing left me confused: why isn't it a problem to use Euclidean distance in t-SNE and UMAP? From the videos, one could have assumed they avoid it entirely.

  • @imadOualid
    @imadOualid 2 months ago

    Thank you a lot for all your videos :D Can you do one about GraphSAGE?