
Computer Vision (63)

[Paper Review] An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale (Vision Transformer). References: [Paper Review] Understanding Attention Is All You Need. Introduction: Attention Is All You Need. "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an atten" (lcyking.tistory.com). [Paper Review] BERT (Pre-training of Deep Bidirectional Transformers for Language Understand.. 2023. 6. 15.
[Paper Review] Understanding Information Maximizing Generative Adversarial Networks (InfoGAN). InfoGAN paper link: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes t" (arxiv.org). Link to understanding GANs: [Deep Learning].. 2022. 11. 18.
[Paper Review] Understanding Deep Convolutional Generative Adversarial Networks (DCGANs). Introduction. Link to understanding \( GAN \): [Deep Learning] Understanding Generative Adversarial Nets (GANs). "Generative Adversarial Networks: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model" (lcyking.tistory.com). Link to understanding \( CNN \): [Deep Learning] Understanding CNN (Convolutional Neural Network) .. 2022. 11. 14.
[Paper Review] Understanding Generative Adversarial Nets (GANs). Introduction: Generative Adversarial Networks. "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that" (arxiv.org). "Adversarial" literally means opposing, and that is exactly how the two models are trained: simultaneously, each against the other. A Generator produces fake images, while a .. 2022. 11. 8.
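The adversarial setup described in that preview (two models trained simultaneously, one producing fakes, one judging them) can be sketched in a toy 1-D setting. This is a minimal illustration, not the paper's implementation: the linear generator, logistic discriminator, initial values, and learning rate below are all hypothetical choices.

```python
import numpy as np

# Toy 1-D GAN sketch: a linear generator G(z) = a*z + b tries to mimic real
# data drawn from N(3, 1), while a logistic discriminator D(x) = sigmoid(w*x + c)
# tries to tell real samples from fakes. Both are updated each step, each
# against the other, using hand-derived gradients (all values hypothetical).
rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 0.5, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(3.0, 1.0, batch)   # samples from the data distribution
    z = rng.normal(0.0, 1.0, batch)      # noise fed to the generator
    fake = a * z + b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: gradient ascent on log D(fake) (non-saturating loss),
    # which pushes fake samples toward regions the discriminator scores high.
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# In this toy run the fake-sample mean tends to drift toward the real mean (3),
# though simple simultaneous gradient updates can oscillate around equilibrium.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(samples.mean())
```

The same two-player structure underlies the full image GANs reviewed in these posts; there, G and D are deep networks and the gradients come from backpropagation rather than the closed-form expressions above.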