Preserve Knowledge
  • 147
  • 1,108,642

Videos

Tesla AI Andrej Karpathy on Scalability in Autonomous Driving
8K views · 3 years ago
Tesla's Senior Director of AI, Andrej Karpathy, delivers a keynote at the CVPR 2020 Scalability in Autonomous Driving Workshop.
Yann LeCun: Turing Award Lecture "The Deep Learning Revolution: The Sequel"
2.3K views · 3 years ago
Yann LeCun's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution: The Sequel"
Geoffrey Hinton: Turing Award Lecture "The Deep Learning Revolution"
7K views · 3 years ago
Geoffrey Hinton's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution"
Microsoft CEO Satya Nadella CVPR 2020
854 views · 3 years ago
Harry Shum chats with Satya Nadella at CVPR 2020
David Duvenaud | Reflecting on Neural ODEs | NeurIPS 2019
26K views · 4 years ago
Original paper: arxiv.org/abs/1806.07366 David's homepage: www.cs.toronto.edu/~duvenaud/ Summary: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth...
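A minimal sketch of the idea (mine, not the paper's code): define the hidden-state dynamics with a tiny MLP and hand the integration to an off-the-shelf black-box solver, here scipy's solve_ivp; shapes and weights are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
# theta: weights of a small MLP that defines the dynamics f(h, t; theta)
W1, b1 = 0.1 * rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(2, 16)), np.zeros(2)

def f(t, h):
    # instead of a discrete stack of layers, parameterize dh/dt
    return W2 @ np.tanh(W1 @ h + b1) + b2

h0 = np.array([1.0, 0.0])           # "input layer": hidden state at t = 0
sol = solve_ivp(f, (0.0, 1.0), h0)  # black-box solver is the forward pass
h1 = sol.y[:, -1]                   # "output layer": hidden state at t = 1
```

The paper additionally trains theta by backpropagating through the solver with the adjoint method, which this sketch omits.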
Yoshua Bengio | From System 1 Deep Learning to System 2 Deep Learning | NeurIPS 2019
39K views · 4 years ago
Slides: www.iro.umontreal.ca/~bengioy/NeurIPS-11dec2019.pdf Summary: Past progress in deep learning has concentrated mostly on learning from a static dataset, mostly for perception tasks and other System 1 tasks which are done intuitively and unconsciously by humans. However, in recent years, a shift in research direction and new tools such as soft-attention and progress in deep reinforcement learning...
NeurIPS 2019 Test of Time Award - Lin Xiao
3.2K views · 4 years ago
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization Slides: imgur.com/a/b2AiEUI Paper: papers.nips.cc/paper/3882-dual-averaging-method-for-regularized-stochastic-learning-and-online-optimization.pdf Abstract: We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function...
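For the l1-regularized instance of that objective, the dual averaging update has a closed soft-threshold form; below is a toy sketch assuming the beta_t = gamma * sqrt(t) schedule from the paper, with lam and gamma as illustrative parameter names.

```python
import numpy as np

def rda_l1(grads, lam=0.1, gamma=1.0):
    """l1-regularized dual averaging, a minimal sketch of Xiao's RDA."""
    gbar = None
    for t, g in enumerate(grads, start=1):
        # running average of all stochastic gradients seen so far
        gbar = g.copy() if gbar is None else gbar + (g - gbar) / t
        beta = gamma * np.sqrt(t)
        # closed-form minimizer of <gbar, w> + lam*||w||_1 + (beta/t)*||w||^2/2
        yield -(t / beta) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
```

Because the update thresholds the averaged gradient rather than the current iterate, it produces exactly sparse weights even under gradient noise, which is the point of the method.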
NIPS 2017 Test of Time Award "Machine learning has become alchemy." | Ali Rahimi, Google
23K views · 6 years ago
Yann LeCun, Christopher Manning on Innate Priors in Deep Learning Systems at Stanford AI
1.5K views · 6 years ago
Yann LeCun is the Chief AI Scientist at Facebook AI Research, a Silver Professor at New York University, and one of the leading voices in AI. He pioneered the early use of convolutional neural networks, which have been central to the modern success of Deep Learning. LeCun has been a leading proponent for the ability of simple but powerful neural architectures to perform sophisticated tasks with...
Meet Geoffrey Hinton, U of T's Godfather of Deep Learning
13K views · 6 years ago
Meet Geoffrey Hinton: U of T Professor Emeritus of computer science, an Engineering Fellow at Google, and Chief Scientific Adviser at the Vector Institute for Artificial Intelligence. In this interview with U of T News, Prof. Hinton discusses his career, the field of artificial intelligence and the importance of funding curiosity-driven scientific research.
Learning Representations: A Challenge for Learning Theory, COLT 2013 | Yann LeCun, NYU
942 views · 6 years ago
Slides: videolectures.net/site/normal_dl/tag=800934/colt2013_lecun_theory_01.pdf Perceptual tasks such as vision and audition require the construction of good features, or good internal representations of the input. Deep Learning designates a set of supervised and unsupervised methods to construct feature hierarchies automatically by training systems composed of multiple stages of trainable modules...
Edward: Library for probabilistic modeling, inference, and criticism | Dustin Tran, Columbia Uni
3.4K views · 6 years ago
Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.
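A sketch of Bayesian linear regression in Edward, adapted from the library's 1.x documentation as best I recall (Edward 1.x targets TensorFlow 1.x, so treat the exact calls as assumptions):

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N, D = 100, 5
X_train = np.random.randn(N, D).astype(np.float32)
y_train = (X_train @ np.ones(D) + 0.1 * np.random.randn(N)).astype(np.float32)

# model: y ~ Normal(Xw + b, 1) with standard normal priors on w and b
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

# mean-field variational approximation to the posterior over w and b
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# variational inference: fit q by minimizing KL(q || p)
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=1000)
```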
Unrolled Generative Adversarial Networks, NIPS 2016 | Luke Metz, Google Brain
2K views · 6 years ago
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein arxiv.org/abs/1611.02163 NIPS 2016 Workshop on Adversarial Training Spotlight We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's...
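The core trick is that the generator is scored against a discriminator that has been updated for k differentiable gradient steps. A toy functional sketch follows; the logistic-regression discriminator and all names are mine, not the paper's.

```python
import torch
import torch.nn.functional as F

def d_logits(p, x):
    # toy discriminator: logistic regression with parameters p = (w, b)
    w, b = p
    return x @ w + b

def d_loss(p, real, fake):
    # standard discriminator loss: -E[log D(real)] - E[log(1 - D(fake))]
    return (F.softplus(-d_logits(p, real)).mean()
            + F.softplus(d_logits(p, fake)).mean())

def unrolled_g_loss(d_params, fake, real, k=5, lr=0.1):
    # take k *differentiable* gradient steps on a copy of D's parameters
    # (d_params must require grad; fake must come from the generator's graph)
    p = list(d_params)
    for _ in range(k):
        grads = torch.autograd.grad(d_loss(p, real, fake), p, create_graph=True)
        p = [pi - lr * gi for pi, gi in zip(p, grads)]
    # non-saturating generator loss against the *unrolled* discriminator
    return F.softplus(-d_logits(p, fake)).mean()
```

Backpropagating unrolled_g_loss reaches the generator both directly through fake and through the unrolled discriminator updates, which is the stabilizing signal the paper describes.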
Semantic Segmentation using Adversarial Networks, NIPS 2016 | Pauline Luc, Facebook AI Research
4.7K views · 6 years ago
Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek arxiv.org/abs/1611.08408 NIPS 2016 Workshop on Adversarial Training Spotlight Adversarial training has been shown to produce state-of-the-art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along...
Conditional Image Synthesis with Auxiliary Classifier GANs, NIPS 2016 | Augustus Odena, Google Brain
1.8K views · 6 years ago
Connecting Generative Adversarial Networks and Actor Critic Methods, NIPS 2016 | David Pfau
1.5K views · 6 years ago
Learning in Implicit Generative Models, NIPS 2016 | Shakir Mohamed, Google DeepMind
1.7K views · 6 years ago
Convex Optimization with Abstract Linear Operators, ICCV 2015 | Stephen P. Boyd, Stanford
5K views · 6 years ago
A Connection Between GANs, Inverse Reinforcement Learning, and Energy Based Models, NIPS 2016
6K views · 6 years ago
Adversarial Training Methods for Semi-Supervised Text Classification, NIPS 2016 | Andrew M. Dai
2.7K views · 6 years ago
Borrowing Ideas from Human Vision, ICCV 2015 PAMI Distinguished Researcher Award | David Lowe, UBC
710 views · 6 years ago
How to train a GAN, NIPS 2016 | Soumith Chintala, Facebook AI Research
9K views · 6 years ago
Energy-Based Adversarial Training and Video Prediction, NIPS 2016 | Yann LeCun, Facebook AI Research
2.9K views · 6 years ago
It's Learning All the Way Down, ICCV 2015 PAMI Distinguished Researcher Award | Yann LeCun, NYU
297 views · 6 years ago
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
151K views · 6 years ago
Deep Learning for Predicting Human Strategic Behavior, NIPS 2016 | Jason Hartford, UBC
3K views · 6 years ago
Predictive Learning, NIPS 2016 | Yann LeCun, Facebook Research
7K views · 6 years ago
Using Fast Weights to Attend to the Recent Past, NIPS 2016 | Jimmy Ba, University of Toronto
2.3K views · 6 years ago
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, NIPS 2016
3.4K views · 6 years ago

COMMENTS

  • @sorbajitgoswami4739
    @sorbajitgoswami4739 4 days ago

    And still the building collapses

  • @jackxiao8140
    @jackxiao8140 25 days ago

    Love the little laugh at 12:58

  • @LifeRepository
    @LifeRepository 25 days ago

    00:05 Ian Goodfellow's journey into deep learning
    02:02 Ian Goodfellow innovated GANs for generative modeling.
    03:51 Goodfellow's determination to make his idea work paid off.
    05:37 Games are at an important crossroads in deep learning
    07:28 Importance of mastering basic math for deep learning
    09:18 Evolution of AI and Deep Learning
    11:19 Entering AI field without needing a PhD.
    13:06 Importance of building security into machine learning algorithms from the start
    Crafted by Merlin AI.

  • @abdAlmajedSaleh
    @abdAlmajedSaleh 1 month ago

    I didn't know it was a dog network.

  • @muhammadrayanmansoor4301
    @muhammadrayanmansoor4301 2 months ago

    This man is great ❤

  • @vq8gef32
    @vq8gef32 2 months ago

    He is a real hero; I am watching his lessons: Love + AI === Andrej

  • @alaad1009
    @alaad1009 5 months ago

    Two amazing teachers !

  • @christiansmith-of7dt
    @christiansmith-of7dt 6 months ago

    I don't even think about russia

  • @zardi9083
    @zardi9083 6 months ago

    Set playback speed to 2x to fully understand what is happening in Andrej's brain

  • @nastaran1010
    @nastaran1010 7 months ago

    Bad presentation... fast, sloppy speaking

  • @daffertube
    @daffertube 7 months ago

    Great talk. Super underrated

  • @revimfadli4666
    @revimfadli4666 7 months ago

    Looks like it greatly outperforms LSTMs, so I wonder what's keeping it from being the next gold standard. Also a bit of a shame it only blew up after Transformers replaced RNNs for mainstream purposes. With the recent surge of graph nets and massively multi-agent learning, I hope it'll get another chance to be used.

  • @onamixt
    @onamixt 8 months ago

    I watched the video as a part of Deep Learning Specialization. Sadly, it's way way over my head to comprehend much of what was said in the video.

  • @pocok5000
    @pocok5000 8 months ago

    the good old days when Musk wasn't a total nutjob

  • @jsfnnyc
    @jsfnnyc 8 months ago

    Best research advice ever!! "Read the literature, but not too much of it."

  • @mermich
    @mermich 8 months ago

    can I say that Ian Goodfellow is the GOAT in modern computer science? T.T

  • @user-sd6lc2qn5q
    @user-sd6lc2qn5q 9 months ago

    You give what is a gem to the people around the world, Sir. Salute from Cambodia

  • @user-sd6lc2qn5q
    @user-sd6lc2qn5q 9 months ago

    Thank you so much, Dr.

  • @surkewrasoul4711
    @surkewrasoul4711 9 months ago

    Hey And, do you still accept donations by any chance? I am hoping for 720p videos from now on.

  • @ChandlerRandolph-yc5re
    @ChandlerRandolph-yc5re 10 months ago

    very informative!

  • @suissdagout5153
    @suissdagout5153 10 months ago

    Missiles making

  • @postnetworkacademy
    @postnetworkacademy 11 months ago

    Great legends are talking about great things.

  • @briancase9527
    @briancase9527 11 months ago

    I so agree with Hinton: have an idea and go for it. I took this approach with something other than AI, but it also worked. What do I mean? I mean, even though my idea wasn't revolutionary or totally worthwhile, I LEARNED A LOT by just going for it and programming the heck out of it. The practical experience I gained served me well--very well--in my first jobs. Remember: your purpose is to learn, and you can do that following your intuition--which is fun--or following someone else's--which is less fun.

  • @fabianmarin8514
    @fabianmarin8514 11 months ago

    The two folks from whom I've learned the most about AI. Thanks so much!

  • @PaulHigginbothamSr
    @PaulHigginbothamSr 11 months ago

    I think the difference between wake and sleep is that during sleep it is in the testing phase, and during waking it is in the operative phase of learning.

  • @Desu_Desu
    @Desu_Desu 11 months ago

    This is an amazing concept. We keep borrowing solutions from biological systems, but no wonder: they had millions of years to solve all those problems before us.

  • @wk4240
    @wk4240 1 year ago

    Seriously doubt Geoffrey Hinton considers himself a hero - more like Dr. Frankenstein now. He's doing his part to spread the word on the dangers of reliance on AI.

  • @YashVinayvanshi-nq2ug
    @YashVinayvanshi-nq2ug 1 year ago

    Great

  • @yunoletmehaveaname
    @yunoletmehaveaname 1 year ago

    Another way I learned to do textual substitution would be the same as saying x[x := (x v x+1)][x := (x v x+1)], in which case you would get:
    First substitution: (x v x+1)[x := (x v x+1)]
    Second substitution: (x v (x)+1) v (x+1 v (x+1)+1)
    Then combining and removing unnecessary parentheses and the repeated x+1 gives the same value as the video: x v x+1 v x+2

  • @gangfang8835
    @gangfang8835 1 year ago

    It took me a month to fully understand everything he discussed in this presentation (at a high level). I think this is the future. Would love to hang out and discuss if anyone is in Toronto.

  • @yunoletmehaveaname
    @yunoletmehaveaname 1 year ago

    I've never heard someone speak so passionately about starting at 0

  • @Abhishekkumar-qj6hb
    @Abhishekkumar-qj6hb 1 year ago

    So basically you guys are utilising the fleet to get varied data, while also checking whether the model works fine; if it fails, you quickly retrain the model on those groups of datasets to make it more robust. Interesting. However, how far are we from the moment where we act as well as humans do with just very few datasets? Because what we are doing is statistical inference on the basis of large datasets. So it's basically good datasets and good compute, as mentioned earlier.

  • @futureprogress
    @futureprogress 1 year ago

    Maybe the cake... wasn't a lie

  • @smithwill9952
    @smithwill9952 1 year ago

    Genius pays respect to genius.

  • @kavorka8855
    @kavorka8855 1 year ago

    What a charlatan, Elon Musk! This guy is basically full of BS, knows absolutely NOTHING about AI other than basic, layman info, yet he finds himself everywhere, pretending to know things. The real founders of Tesla talked about his ego and attention seeking mindset.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    10:58 global community

  • @shantanuraj7086
    @shantanuraj7086 1 year ago

    Excellent interview. Down to earth, straight, and a lot of information in this interview. Great work, Andrew Ng, with your contributions.

  • @calmhorizons
    @calmhorizons 1 year ago

    Parachuting in from the future to confirm that we have now unleashed this alchemy onto the public in pursuit of seed capital. What a time to be alive...

  • @Gabcikovo
    @Gabcikovo 1 year ago

    2:48 both players are neural networks

    • @Gabcikovo
      @Gabcikovo 1 year ago

      5:16

    • @Gabcikovo
      @Gabcikovo 1 year ago

      5:26 the goal of the generator is to fool the discriminator

    • @Gabcikovo
      @Gabcikovo 1 year ago

      5:30 eventually the generator is forced to produce data as true as possible

    • @Gabcikovo
      @Gabcikovo 1 year ago

      Uhrik and Putin cried bitter tears

    • @Gabcikovo
      @Gabcikovo 1 year ago

      3:10 one of the players is trained :) to do as well as possible on the worst possible input
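The thread above walks through the GAN game; for reference, here is a minimal toy training loop in PyTorch (architecture, data, and hyperparameters are illustrative only, not from the talk):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" data distribution
    fake = G(torch.randn(64, 8))            # generator maps noise to data
    # discriminator step: label real samples 1, generated samples 0
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: try to make the discriminator say "real"
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```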

  • @theLowestPointInMyLife
    @theLowestPointInMyLife 1 year ago

    Walt and Gale vibes

  • @Gabcikovo
    @Gabcikovo 1 year ago

    38:10 a thought is just a great big vector of neural activity

    • @Gabcikovo
      @Gabcikovo 1 year ago

      38:19 people who thought that thoughts were symbolic expressions made a huge mistake.. what comes in is a string of words, and what comes out is a string of words, and because of that, strings of words are the obvious way to represent things, so they thought what must be in between was a string of words or something like it.. Hinton thinks there's nothing like a string of words in between; he thinks thinking of it as some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels :))

  • @Gabcikovo
    @Gabcikovo 1 year ago

    35:08 our relationship to computers has changed.. instead of programming them, we show them, and they figure it out

  • @Gabcikovo
    @Gabcikovo 1 year ago

    18:00 2007 ignored Hinton, and Bengio picked it up later on

  • @Gabcikovo
    @Gabcikovo 1 year ago

    0:19 Godfather 😎

  • @hmthanhgm
    @hmthanhgm 1 year ago

    Geoff Hinton is legendary

  • @shubharthaksangharsha6248

    2 legends in one frame

  • @riteshajoodha4401
    @riteshajoodha4401 1 year ago

    Wonderful, always a pleasure to hear you speak!

  • @rb8049
    @rb8049 1 year ago

    So much intuition.

  • @-mwolf
    @-mwolf 1 year ago

    10:55

  • @justchary
    @justchary 1 year ago

    What an amazing presentation! Thank you