What are Non-Visual and Text-Based Metaverses?

In the rapidly evolving digital landscape, the concept of the metaverse has taken center stage. This article explores two unique types of metaverses: non-visual and text-based. Non-visual metaverses rely on senses other than sight, such as sound, smell, and touch, to create immersive experiences for users.

Sound-based metaverses, in particular, are gaining traction, with technologies like spatial audio and ambisonics creating 3D audio experiences. Meanwhile, text-based metaverses, which date back to the 1980s, build virtual worlds around written communication. The incorporation of AI and ML models is set to revolutionize these text-based metaverses further.

Non-visual metaverse: In a non-visual metaverse, a user relies on non-visual senses such as sound, smell, and touch to interact with the content. Currently, most non-visual metaverse projects focus on sound, as it is the easiest sense to incorporate. Spatial audio technology is the key component of sound-based metaverses, and technologies like Dolby Atmos and Waves NX are being used by metaverse platforms to give users a 3D audio experience.
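
To make the idea concrete, here is a minimal, platform-agnostic sketch in Python of the basic math behind positional audio: attenuating a source by distance and panning it between the ears based on its direction. The function and values are illustrative assumptions, not any platform's actual API.

```python
import math

def spatial_gains(listener_xy, source_xy, ref_dist=1.0):
    """Compute simple left/right gains for a positioned sound source.

    Distance attenuation follows an inverse law; panning uses a
    constant-power curve based on the source's azimuth.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    attenuation = ref_dist / dist          # inverse-distance rolloff

    azimuth = math.atan2(dx, dy)           # 0 = straight ahead
    # Map azimuth [-pi/2, pi/2] to a pan position [0, 1] and apply
    # constant-power panning so overall loudness stays steady.
    pan = max(-math.pi / 2, min(math.pi / 2, azimuth)) / math.pi + 0.5
    left = attenuation * math.cos(pan * math.pi / 2)
    right = attenuation * math.sin(pan * math.pi / 2)
    return left, right

# A source two meters ahead and slightly to the right of the listener:
# the right channel comes out louder, and both fade with distance.
print(spatial_gains((0.0, 0.0), (0.5, 2.0)))
```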

Other audio techniques being used by metaverse platforms include:

  • Ambisonics: Use of four or more microphones to capture audio from all directions, which can then be decoded and rendered to create a 360-degree soundscape.
  • Object-based audio: A technology that allows audio to be placed in a virtual 3D space, with the listener’s position determining the audio mix and placement.
  • Head-related transfer function (HRTF): A technique that uses filters to simulate the way sound travels through the human head and ears, creating a more realistic spatial audio experience (a simplified version is sketched after this list).
  • Binaural audio: A method of recording audio using two microphones that are placed inside a dummy head, creating a more immersive and accurate spatial audio experience.
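
As a rough illustration of the HRTF idea above, the sketch below applies only the interaural time and level differences that real head-related filters encode; a genuine HRTF renderer convolves the signal with measured impulse responses, so treat this as a simplified stand-in. The head-radius constant and the Woodworth-style delay formula are common textbook approximations, not values from any specific system.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # approximate human head radius in meters

def binauralize(mono, sample_rate, azimuth_rad):
    """Render a mono signal to stereo with a crude ITD/ILD model."""
    # Woodworth's formula approximates the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    delay_samples = int(round(itd * sample_rate))

    # Level difference: the far ear is a few decibels quieter.
    near_gain = 1.0
    far_gain = 10 ** (-3.0 * abs(np.sin(azimuth_rad)) / 20)

    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    if azimuth_rad >= 0:                 # source to the right
        left, right = far_gain * delayed, near_gain * padded
    else:                                # source to the left
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=-1)

# 0.5 s of a 440 Hz tone placed 45 degrees to the listener's right.
sr = 44100
t = np.arange(int(0.5 * sr)) / sr
stereo = binauralize(np.sin(2 * np.pi * 440 * t), sr, np.pi / 4)
print(stereo.shape)  # (samples, 2)
```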

Other forms of the non-visual metaverse, such as smell- and touch-based experiences, still have a long way to go; however, some small strides have been made, with OVR Technology planning to release a headset later in 2023 that emits scents to create different aromas. Similarly, Emerge is creating a sensor that would allow people to feel the texture of an object in the metaverse.

Text-based metaverse: The text-based metaverse is one of the earliest metaverse formats, with programs such as multi-user shared hallucinations (MUSHes) and multi-user dungeons (MUDs) having been around since the 1980s. These are virtual worlds that rely primarily on text-based communication rather than visual or audio interfaces. MUDs and MUSHes were followed by MOOs (MUD, object-oriented), which allowed users to write programming scripts to control the behavior of objects and the environment.
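
To illustrate the kind of object scripting MOOs introduced, here is a small Python analogue. MOOs use their own server-side language with verbs stored on objects in a shared database, so this is a conceptual sketch rather than MOO code: a player attaches a "verb" to an in-world object, and invoking the verb runs their script.

```python
class WorldObject:
    """A tiny analogue of a scriptable MOO object."""

    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.verbs = {}          # verb name -> handler written by a player

    def add_verb(self, verb, handler):
        self.verbs[verb] = handler

    def call(self, verb, player):
        handler = self.verbs.get(verb)
        if handler is None:
            return f"You can't {verb} the {self.name}."
        return handler(self, player)

# A player-scripted verb that changes the object's state in the world.
def light(obj, player):
    obj.description = "A brass lantern, burning brightly."
    return f"{player} lights the {obj.name}."

lantern = WorldObject("lantern", "A brass lantern, currently dark.")
lantern.add_verb("light", light)

print(lantern.call("light", "Wizard"))   # Wizard lights the lantern.
print(lantern.description)               # ...burning brightly.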

Incorporating AI and ML models is the next big step for the text-based metaverse, particularly after the arrival of ChatGPT. Many users have tried combining ChatGPT with multi-user dungeons to achieve intriguing results. AI Dungeon is one text-based metaverse that is actively using AI to provide users with a unique experience.
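
As a rough sketch of how such a combination might look, the snippet below drives a simple dungeon narrator through OpenAI's Python client. The system prompt, model name, and turn loop are illustrative assumptions for this article, not how AI Dungeon or any particular MUD actually works; it assumes the `openai` package is installed and an `OPENAI_API_KEY` is set in the environment.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the narrator of a text-based dungeon. Describe each room "
    "in two or three sentences and end with the exits the player can take."
)

def narrate(history, player_action):
    """Send the player's action plus prior turns to the model."""
    history.append({"role": "user", "content": player_action})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(narrate(history, "I step through the crumbling archway."))
print(narrate(history, "I follow the sound of dripping water."))
```

Keeping the full turn history in the message list is what lets the model act like a persistent world: earlier rooms and actions stay in context, so its descriptions remain consistent across turns.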

In conclusion, the metaverse is a diverse and dynamic digital frontier. Non-visual metaverses are pushing the boundaries of sensory engagement, with sound currently leading the way and promising developments in smell and touch-based experiences. On the other hand, the resurgence of text-based metaverses, bolstered by AI and ML advancements, is redefining interactive storytelling and user experience. As we continue to explore and innovate within these spaces, the possibilities for the metaverse seem boundless.