Inworld AI is a startup building a platform that lets users construct AI-driven virtual avatars for the metaverse. It recently announced $7 million in seed funding.
Ilya Gelfenbeyn, co-founder and CEO of Inworld, said in an exclusive interview that “Inworld AI is a platform for developing, basically, brains for virtual characters” to populate virtual settings such as the metaverse, VR, and AR worlds. “What we offer is a collection of tools that enable developers to add brains and create these characters for the world, in a variety of settings.”
To create immersive characters, Inworld AI employs a combination of AI technologies, including natural language understanding and processing, optical character recognition, reinforcement learning, and conversational AI. Together, these produce sophisticated virtual characters that can answer questions and hold conversations.
Inworld AI isn’t working on a visual avatar design solution. Instead, it wants to build an AI development platform that allows companies that make digital avatars and virtual characters to add more advanced communication to their visual designs.
The platform’s ultimate purpose, according to Gelfenbeyn, is to give visual avatar providers and organizations a way to create “characters that can interact naturally with broad and entirely open discussion,” though speech is only the beginning of its AI communication capabilities.
“Virtual world characters should not be confined to speech,” says Gelfenbeyn, “but should be able to engage with numerous modalities utilized by humans, such as facial gestures, body language, emotions, and physical interactions.”
Using AI minds to improve the metaverse experience
“Our technological stack is built around the human brain. Perception, cognition, and behavior are the three main components. Perception is concerned with receiving information and comprehending the environment and other agents through the use of senses, such as hearing and seeing,” Gelfenbeyn explains.
The company employs a complex mix of speech-to-text conversion, rule engines, natural language understanding, OCR, and event triggers, which enable virtual characters to perceive the world both audibly and visually.
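Inworld has not published the details of this perception pipeline, but the idea of routing converted inputs (speech-to-text, OCR) through a rule engine that fires events can be sketched roughly as follows. All class, function, and event names here are hypothetical illustrations, not Inworld’s actual API.

```python
# Illustrative sketch of a perception pipeline: normalized inputs
# (e.g. speech-to-text or OCR output) pass through simple rules,
# each of which can trigger an event the character reacts to.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Percept:
    modality: str  # e.g. "speech" or "vision"
    text: str      # normalized textual form of the raw input


@dataclass
class PerceptionPipeline:
    # Rule engine: each rule pairs a predicate with an event name.
    rules: List[Tuple[Callable[[Percept], bool], str]] = field(default_factory=list)

    def add_rule(self, predicate: Callable[[Percept], bool], event: str) -> None:
        self.rules.append((predicate, event))

    def perceive(self, percept: Percept) -> List[str]:
        """Return the names of all events triggered by one percept."""
        return [event for predicate, event in self.rules if predicate(percept)]


pipeline = PerceptionPipeline()
# Fire a greeting event when speech-to-text output contains "hello".
pipeline.add_rule(
    lambda p: p.modality == "speech" and "hello" in p.text.lower(),
    "greet_player",
)
# Fire a different event when OCR spots an exit sign in view.
pipeline.add_rule(
    lambda p: p.modality == "vision" and "exit" in p.text.lower(),
    "notice_exit_sign",
)

events = pipeline.perceive(Percept("speech", "Hello there!"))
print(events)  # ['greet_player']
```

In a real system the predicates would be replaced by trained NLU and vision models, but the overall shape (convert, match, trigger) is the same idea Gelfenbeyn describes.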
For MetaNews.