Some Notes on Keyframe Extraction
Notes on the paper 《Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling》
Notes on the paper 《HiT: Hierarchical Transformer with Momentum Contrast for Video-Text Retrieval》
Notes from the post-competition sharing session by the top teams of Track 1 (multimodal video similarity) of the 2021 AI Algorithm Competition, jointly hosted by CIKM and QQ Browser.
Notes on Contrastive Learning
Notes on the paper 《Support-set bottlenecks for video-text representation learning》
Notes on the paper 《T2VLAD: Global-Local Sequence Alignment for Text-Video Retrieval》
Notes on VLAD, NetVLAD, and NeXtVLAD, adapted from Zhihu
Notes on the paper 《HANet: Hierarchical Alignment Networks for Video-Text Retrieval》
Partial notes on 《Transformers in Vision: A Survey》
Some common vocabulary and sentence patterns for academic paper writing, for anyone who struggles with writing
A blog post on learning to rank that I came across, reposted from "LTR精排序"
Reading notes on the paper 《Airbert: In-domain Pretraining for Vision-and-Language Navigation》
Imitation Learning, which falls within the scope of Reinforcement Learning.