SLMming Down Latency: How NVIDIA’s First On-Device Small Language Model Makes Digital Humans More Lifelike

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC and workstation users.

At Gamescom this week, NVIDIA announced that NVIDIA ACE — a suite of technologies for bringing digital humans to life with generative AI — now includes the company’s first on-device small language model (SLM), powered locally by RTX AI.

The model, called Nemotron-4 4B Instruct, provides better role-play, retrieval-augmented generation and function-calling capabilities, so game characters can more intuitively comprehend player instructions, respond to gamers, and perform more accurate and relevant actions.

Available as an NVIDIA NIM microservice for cloud and on-device deployment by game developers, the model is optimized for low memory usage, offering faster response times and providing developers a way to take advantage of over 100 million GeForce RTX-powered PCs and laptops and NVIDIA RTX-powered workstations.

The SLM Advantage

An AI model’s accuracy and performance depend on the size and quality of the dataset used for training. Large language models are trained on vast amounts of data, but are typically general-purpose and contain excess information for most uses.

SLMs, on the other hand, focus on specific use cases. So even with less data, they’re capable of delivering more accurate responses, more quickly — critical elements for conversing naturally with digital humans.

Nemotron-4 4B was first distilled from the larger Nemotron-4 15B LLM. Distillation requires the smaller model, called a “student,” to mimic the outputs of the larger model, appropriately called a “teacher.” During this process, noncritical parts of the student model are pruned, or removed, to reduce its parameter count. Then the SLM is quantized, which reduces the precision of the model’s weights.
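To make the distillation, pruning and quantization steps concrete, here’s a minimal PyTorch sketch. The tiny networks below are stand-ins for illustration only, not the actual Nemotron teacher and student models.

```python
# Minimal sketch of distillation, pruning and quantization.
# The toy "teacher" and "student" networks are placeholders, not Nemotron models.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 1000)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1000))

x = torch.randn(8, 128)          # a batch of stand-in inputs
T = 2.0                          # softmax temperature for distillation

with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher produces soft targets
student_logits = student(x)

# Distillation loss: the student learns to mimic the teacher's output distribution.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
loss.backward()                  # in real training, an optimizer step would follow

# Pruning: zero out low-magnitude weights to shrink the effective parameter count.
prune.l1_unstructured(student[0], name="weight", amount=0.3)

# Quantization: store weights at lower precision (here, float16) to cut memory.
student_fp16 = student.half()
```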

With fewer parameters and less precision, Nemotron-4 4B has a lower memory footprint and faster time to first token — how quickly a response begins — than the larger Nemotron-4 LLM while still maintaining a high level of accuracy due to distillation. Its smaller memory footprint also means games and apps that integrate the NIM microservice can run locally on more of the GeForce RTX AI PCs and laptops and NVIDIA RTX AI workstations that consumers own today.
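As a rough illustration of why fewer parameters and lower precision shrink the footprint: weight memory scales with parameter count times bytes per weight. The precisions below (FP16, INT4) are assumptions for illustration; the post doesn’t state which precision Nemotron-4 4B ships at, and real memory use also includes activations and the KV cache.

```python
# Back-of-the-envelope memory math: weight memory ≈ parameter count × bytes per weight.
def weight_memory_gb(num_params: float, bytes_per_weight: float) -> float:
    return num_params * bytes_per_weight / 1e9

for name, params in [("Nemotron-4 15B", 15e9), ("Nemotron-4 4B", 4e9)]:
    for precision, nbytes in [("FP16", 2), ("INT4", 0.5)]:
        print(f"{name} @ {precision}: ~{weight_memory_gb(params, nbytes):.1f} GB")

# 15B at FP16 needs roughly 30 GB just for weights, beyond most consumer GPUs;
# 4B at INT4 needs roughly 2 GB, which fits comfortably in GeForce RTX VRAM.
```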

This new, optimized SLM is also purpose-built with instruction tuning, a technique for fine-tuning models on instructional prompts to better perform specific tasks. This can be seen in Mecha BREAK, a video game in which players can converse with a mechanic game character and instruct it to switch and customize mechs.
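Instruction tuning trains the model on structured prompt/response pairs like the hypothetical record below. The persona, tool name and fields are invented for illustration, not taken from Mecha BREAK or the actual training data, but they show how role-play and function calling can be expressed as instructions.

```python
# A hypothetical instruction-tuning record in a common prompt/response format.
example = {
    "system": "You are a mech mechanic NPC. Answer in character and, when the "
              "player asks for a change, call the appropriate tool.",
    "instruction": "Swap my current mech's loadout to the long-range sniper build.",
    "response": {
        "tool_call": {
            "name": "customize_mech",           # hypothetical function the game exposes
            "arguments": {"loadout": "long_range_sniper"},
        },
        "dialogue": "You got it, pilot. Fitting the sniper rig now.",
    },
}

print(example["response"]["tool_call"])
```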

ACEs Up

ACE NIM microservices allow developers to deploy state-of-the-art generative AI models through the cloud or on RTX AI PCs and workstations to bring AI to their games and applications. With ACE NIM microservices, non-playable characters (NPCs) can dynamically interact and converse with players in the game in real time.

ACE consists of key AI models for speech-to-text, language, text-to-speech and facial animation. It’s also modular, allowing developers to choose the NIM microservice needed for each element in their particular process.
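The sketch below shows how an app might chain those modular stages for one NPC turn. Every function is a hypothetical stub standing in for a call to the corresponding microservice; none of these names come from the ACE SDK itself.

```python
# High-level sketch of chaining the ACE stages; all functions are illustrative stubs.

def transcribe(audio: bytes) -> str:          # speech-to-text stage (e.g. Riva ASR, Whisper)
    return "switch to the sniper mech"

def generate_reply(text: str) -> str:         # language stage (e.g. Nemotron-4 4B Instruct)
    return "Switching you to the sniper mech now."

def synthesize_speech(text: str) -> bytes:    # text-to-speech stage (e.g. Riva TTS)
    return b"<audio bytes>"

def animate_face(audio: bytes) -> dict:       # facial-animation stage (Audio2Face)
    return {"blendshapes": []}

def run_npc_turn(player_audio: bytes) -> tuple[bytes, dict]:
    text = transcribe(player_audio)
    reply = generate_reply(text)
    speech = synthesize_speech(reply)
    return speech, animate_face(speech)       # the game engine renders the result

print(run_npc_turn(b"<mic input>"))
```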

NVIDIA Riva automatic speech recognition (ASR) processes a user’s spoken language and uses AI to deliver a highly accurate transcription in real time. The technology builds fully customizable conversational AI pipelines using GPU-accelerated multilingual speech and translation microservices. Other supported ASRs include OpenAI’s Whisper, an open-source neural net that approaches human-level robustness and accuracy on English speech recognition.
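For example, the open-source Whisper model mentioned above can be run locally with a few lines of Python. This assumes the openai-whisper package is installed and that a file named player_input.wav exists; Riva ASR would instead be called through its own client library or microservice API.

```python
# Transcribing player speech with the open-source Whisper model.
import whisper

model = whisper.load_model("base")             # small English-capable checkpoint
result = model.transcribe("player_input.wav")  # run ASR on the audio file
print(result["text"])                          # the recognized transcript
```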

Once transcribed into text, the input goes into an LLM — such as Google’s Gemma, Meta’s Llama 3 or now NVIDIA Nemotron-4 4B — to start generating a response to the user’s original voice input.
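Language NIM microservices expose an OpenAI-compatible HTTP API, so the generation step can look like the sketch below. The local URL and the exact model identifier are assumptions for illustration, not values taken from this post.

```python
# Generating the NPC's reply from the transcript via a locally hosted LLM NIM microservice.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
                api_key="not-needed-locally")

completion = client.chat.completions.create(
    model="nvidia/nemotron-mini-4b-instruct",           # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a mech mechanic NPC in Mecha BREAK."},
        {"role": "user", "content": "Swap me over to the long-range sniper build."},
    ],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```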

Next, another piece of Riva technology — text-to-speech — generates an audio response. ElevenLabs’ proprietary AI speech and voice technology is also supported and has been demoed as part of ACE.

Finally, NVIDIA Audio2Face (A2F) generates facial expressions that can be synced to dialogue in many languages. With the microservice, digital avatars can display dynamic, realistic emotions streamed live or baked in during post-processing.

The AI network automatically animates face, eyes, mouth, tongue and head motions to match the selected emotional range and level of intensity. And A2F can automatically infer emotion directly from an audio clip.

The full character or digital human is then animated in a renderer, like Unreal Engine or the NVIDIA Omniverse platform.

AI That’s NIMble

In addition to its modular support for various NVIDIA-powered and third-party AI models, ACE allows developers to run inference for each model in the cloud or locally on RTX AI PCs and workstations.

The NVIDIA AI Inference Manager software development kit enables hybrid inference based on needs such as experience, workload and cost. It streamlines AI model deployment and integration for PC application developers by preconfiguring the PC with the necessary AI models, engines and dependencies. Apps and games can then orchestrate inference seamlessly from a PC or workstation to the cloud.
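The snippet below is not the AI Inference Manager SDK’s actual API; it only sketches, under assumed inputs, the kind of policy such hybrid orchestration applies: run locally when the GPU has headroom, otherwise fall back to the cloud.

```python
# Illustrative hybrid-inference policy; all names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_vram_gb: float      # memory the model needs on the local GPU
    latency_sensitive: bool   # e.g. in-game dialogue vs. offline batch work

def choose_backend(req: InferenceRequest, free_vram_gb: float) -> str:
    if req.model_vram_gb <= free_vram_gb:
        return "local"        # fits on the RTX GPU: lowest latency, no cloud round trip
    if req.latency_sensitive:
        return "cloud"        # too big locally, but still needs a fast answer
    return "cloud-batch"      # defer to non-interactive cloud capacity

print(choose_backend(InferenceRequest(model_vram_gb=2.0, latency_sensitive=True),
                     free_vram_gb=8.0))
```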

ACE NIM microservices run locally on RTX AI PCs and workstations, as well as in the cloud. Current microservices running locally include Audio2Face, in the Covert Protocol tech demo, and the new Nemotron-4 4B Instruct and Whisper ASR in Mecha BREAK.

To Infinity and Beyond

Digital humans go far beyond NPCs in games. At last month’s SIGGRAPH conference, NVIDIA previewed “James,” an interactive digital human that can connect with people using emotions, humor and more. James is based on a customer-service workflow using ACE.

Decades of changes in how people communicate with technology eventually led to the creation of digital humans. The future of the human-computer interface will have a friendly face and require no physical inputs.

Digital humans drive more engaging and natural interactions. According to Gartner, 80% of conversational offerings will embed generative AI by 2025, and 75% of customer-facing applications will have conversational AI with emotion. Digital humans will transform multiple industries and use cases beyond gaming, including customer service, healthcare, retail, telepresence and robotics.

Users can get a glimpse of this future now by interacting with James in real time at ai.nvidia.com.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.
