On the journey of
Original Paper) BEiT: https://arxiv.org/abs/2106.08254

BEiT: BERT Pre-Training of Image Transformers — "We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pre…" (arxiv.org)

Contribution: BERT's Masked Lan..
Experiences & Study/VQA
2023. 10. 17. 07:48