attention (2)

Original Paper) Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? https://arxiv.org/abs/1606.03556

* This post was written while studying the Attention mechanism and the Transformer in order to understand the discussion around them a bit better. Please note that this is not material formally covered in URP :)

References (GitHub & Hugging Face)
https://nlpinkorean.github.io/illustrated-transformer/
https://github.com/hyunwoongko/transformer/blob/master/models/layers/multi_head_attention.py
https://github.com/rwightman/pytorch-image-models/blob/a520da9b495422bc773fb5dfe10819acb8bd7c5c/timm/models/vis..
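For readers who want a concrete starting point before reading the multi_head_attention.py reference above, here is a minimal sketch of multi-head (scaled dot-product) attention in PyTorch. It is an illustrative sketch, not the exact code from the linked repo; the names MultiHeadAttention, d_model, and n_head are chosen here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Minimal multi-head attention sketch (illustrative, not the linked repo's code)."""
    def __init__(self, d_model: int, n_head: int):
        super().__init__()
        assert d_model % n_head == 0
        self.n_head = n_head
        self.d_head = d_model // n_head
        # Separate projections for query, key, value, plus an output projection
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        b, len_q, _ = q.shape
        len_k = k.shape[1]
        # Project and split into heads: (batch, n_head, seq_len, d_head)
        q = self.w_q(q).view(b, len_q, self.n_head, self.d_head).transpose(1, 2)
        k = self.w_k(k).view(b, len_k, self.n_head, self.d_head).transpose(1, 2)
        v = self.w_v(v).view(b, len_k, self.n_head, self.d_head).transpose(1, 2)
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_head)) V
        scores = q @ k.transpose(-2, -1) / (self.d_head ** 0.5)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = attn @ v
        # Merge heads back to (batch, seq_len, d_model) and project out
        out = out.transpose(1, 2).contiguous().view(b, len_q, -1)
        return self.w_o(out)

# Usage: for self-attention, the same tensor is passed as q, k, and v
x = torch.randn(2, 10, 64)                      # (batch, seq_len, d_model)
mha = MultiHeadAttention(d_model=64, n_head=8)
print(mha(x, x, x).shape)                       # torch.Size([2, 10, 64])
```

The key idea to keep in mind for the rest of the post: each head attends over the sequence independently in a lower-dimensional subspace (d_head = d_model / n_head), and the heads are concatenated and projected back to d_model.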