Abstract: Multimodal large language models (MLLMs) have enabled open-world visual understanding by injecting visual input into large language models (LLMs) as extra context tokens. However, when ...