[Paper Reading] BEiT: BERT Pre-Training of Image Transformers
Original Paper) BEiT: https://arxiv.org/abs/2106.08254

"We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers."

Contribution

Following BERT's Masked Language Modeling (MLM) from NLP, BEiT proposes Masked Image Modeling (MIM) as a self-supervised pre-training task for Vision Transformers: random image patches are masked, and the model is trained to predict the discrete visual tokens of the masked patches.
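To make the idea concrete, here is a minimal PyTorch sketch of a BEiT-style MIM objective, not the official implementation. It assumes patch embeddings and their discrete visual-token ids (produced in BEiT by a separately trained dVAE tokenizer) are given as inputs; the class and argument names are hypothetical, and uniform random masking is used in place of BEiT's blockwise masking.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedImageModeling(nn.Module):
    """Sketch of a BEiT-style Masked Image Modeling (MIM) loss.

    Assumptions (hypothetical, not the official BEiT code):
    - `encoder` is any ViT-like module mapping (B, N, D) -> (B, N, D).
    - `token_ids` (B, N) are discrete visual-token ids from an external
      tokenizer (a dVAE in BEiT).
    """

    def __init__(self, encoder: nn.Module, dim: int, vocab_size: int):
        super().__init__()
        self.encoder = encoder
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))  # learnable [MASK]
        self.head = nn.Linear(dim, vocab_size)  # predicts visual-token ids

    def forward(self, patch_embeddings, token_ids, mask_ratio=0.4):
        B, N, D = patch_embeddings.shape
        # Choose ~40% of patches to mask (BEiT uses blockwise masking;
        # uniform random masking is a simplification here).
        mask = torch.rand(B, N, device=patch_embeddings.device) < mask_ratio
        # Replace masked patch embeddings with the [MASK] token.
        x = torch.where(mask.unsqueeze(-1),
                        self.mask_token.expand(B, N, D),
                        patch_embeddings)
        hidden = self.encoder(x)                  # (B, N, D)
        logits = self.head(hidden)                # (B, N, vocab_size)
        # Cross-entropy only on masked positions, mirroring BERT's MLM loss.
        return F.cross_entropy(logits[mask], token_ids[mask])
```

The key design point this mirrors from the paper: the prediction target is a discrete token id per patch (a classification problem, like BERT predicting word ids), not raw pixels, and the loss is computed only at masked positions.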
Experiences & Study/VQA
2023. 10. 17. 07:48