We humans speak different languages, but when we want to travel, mix, and engage, we all need a common language, like English.
Each AI embedding model creates embeddings in its own way, so a vector produced by one model cannot be directly understood by another. Yet they all need to speak to each other.
What will be the unified embedding “language” among AIs?
I do not know the answer, but researchers are exploring a few directions:
They try to align different embeddings into a shared space so models can understand each other. Some use linear transformations, some use neural networks, and some believe there is a universal geometry hidden in all embeddings.
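As a rough illustration of the linear-transformation idea, here is a minimal sketch that maps one model's embeddings into another's space using an orthogonal Procrustes fit. The matrices are random placeholders; in practice they would come from encoding the same texts with two different (hypothetical) embedding models.

```python
# Minimal sketch: align embeddings from "model A" into "model B"'s space
# with a single linear (orthogonal) transformation.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 1000, 384          # assumed: paired texts, equal dimensions

A = rng.normal(size=(n_pairs, dim))   # placeholder embeddings from model A
B = rng.normal(size=(n_pairs, dim))   # placeholder embeddings of the same texts from model B

# Orthogonal Procrustes: find orthogonal W minimizing ||A @ W - B||.
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt

aligned_A = A @ W   # model A's embeddings, now expressed in model B's "language"
```

With real paired embeddings, `aligned_A` could then be compared against model B's vectors directly, for example with cosine similarity.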
It is not solved yet, but one day, AIs may have their own “English” for embeddings.
What I know is that when that time arrives and the problem of semantic interoperability between AI models is solved, all AI models will talk to each other in the backend. We will see seamless backend integrations across all apps, enabling true transformation.
