The “A.I.D.J.” is the third neural-net-driven, engine-hosted, procedural AI character [ntm] has developed for Intel.

The full set of AI characters – a guitarist, a bassist, and a DJ – is the centerpiece of Intel’s promotional shows advertising its latest AI processing chip, the Movidius Compute Stick.

In 2018, our avatars were on stage at Intel’s CES Keynote, TEDx, AI DevCon San Francisco, Computex Taiwan, and AI DevCon India.

The AI system operates in a “call and response” format. First, a human player lays down a sequence of notes on a keyboard (the “call”). That sequence is processed by a musical neural net trained by Intel’s AI team to ingest and output music in the intended style (e.g. jazz, rock, Latin). The neural net then generates and plays a new, unique sequence of notes that takes cues from the rhythm and range of the call (the “response”). The human answers back, turning this feedback loop into an improvisational jam session between human and AI musicians.
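As a rough illustration of that loop, here is a minimal Python sketch; `generate_response`, the `(pitch, duration)` note format, and the callback names are placeholders standing in for Intel’s actual neural net and playback stack, not their real API:

```python
import random

def generate_response(call_notes, style="jazz"):
    """Placeholder for the musical neural net: reuse the call's rhythm
    while picking new pitches inside the call's range."""
    low = min(pitch for pitch, _ in call_notes)
    high = max(pitch for pitch, _ in call_notes)
    return [(random.randint(low, high), duration) for _, duration in call_notes]

def jam_session(get_human_call, play_notes, rounds=8):
    """Run a few rounds of the human/AI feedback loop."""
    for _ in range(rounds):
        call = get_human_call()             # the human lays down the "call"
        response = generate_response(call)  # the model answers with a "response"
        play_notes(response)                # the response is performed back
```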

The DJ character is driven in Unity by real-time state and data streams coming from Intel’s Music AI Engine.

He can dance, turn knobs, press buttons, spin turntables, pump up the crowd, listen to his headphones, and look at his on-stage human counterpart, all at custom levels of intensity, without the system ever breaking or producing an unrealistic pose.

As the beat intensifies, so does the energy of the avatar’s posture. When the audio style shifts, so does the dancing style. When there’s a particularly exciting riff, the DJ pumps up the crowd.
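A hedged sketch of how those mappings might look on the data side, using the python-osc library as a stand-in; the OSC addresses, dance-style names, and threshold are invented for illustration, not the production values:

```python
from pythonosc.udp_client import SimpleUDPClient

UNITY = SimpleUDPClient("127.0.0.1", 9000)   # Unity app listening for OSC

# Illustrative mapping from detected audio style to a dance-style tag.
STYLE_TO_DANCE = {"jazz": "sway", "rock": "headbang", "latin": "salsa"}

def update_character(beat_intensity, style, riff_excitement):
    # Posture energy tracks the beat (normalized 0..1).
    UNITY.send_message("/dj/posture_intensity", float(beat_intensity))
    # Dance style follows the detected audio style.
    UNITY.send_message("/dj/dance_style", STYLE_TO_DANCE.get(style, "sway"))
    # A particularly exciting riff triggers the crowd-pump gesture.
    if riff_excitement > 0.8:
        UNITY.send_message("/dj/pump_crowd", 1)
```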

Reactive VFX and shaders pulse to the flow of data processing and the musical jam session. For example, while the character listens to musical input from his human counterpart, strands of neural pathways distort and glow as the AI processor “learns” and then comes up with its next generative musical phrases.

As the character plays back what the Movidius processor has just generated, custom shaders on his arms distort and glow, pulsing to the beat with increasing intensity.
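One plausible way to derive that kind of shader parameter is an attack/release envelope follower over the audio amplitude, forwarded to Unity as a single float the arm shaders can read; the address and smoothing constants below are assumptions, not the shipped values:

```python
from pythonosc.udp_client import SimpleUDPClient

UNITY = SimpleUDPClient("127.0.0.1", 9000)

class GlowEnvelope:
    """Follows the audio amplitude: rises quickly on a hit, decays slowly."""
    def __init__(self, attack=0.6, release=0.05):
        self.level = 0.0
        self.attack, self.release = attack, release

    def update(self, amplitude):
        # Choose the fast or slow coefficient depending on direction.
        coeff = self.attack if amplitude > self.level else self.release
        self.level += coeff * (amplitude - self.level)
        # Send the smoothed value as the shader's glow intensity.
        UNITY.send_message("/dj/arm_glow", self.level)
        return self.level
```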

The DJ podium also reacts to incoming MIDI note sequences, lighting up emissive shaders with each light corresponding to a note or range of notes.
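A sketch of that note-to-light mapping, bucketing incoming MIDI notes into podium light indices and forwarding them as OSC values; the MIDI port, note range, light count, and addresses are illustrative assumptions:

```python
import mido
from pythonosc.udp_client import SimpleUDPClient

UNITY = SimpleUDPClient("127.0.0.1", 9000)
NUM_LIGHTS = 16
LOW_NOTE, HIGH_NOTE = 36, 96          # spread this note range across the podium

def light_for_note(note):
    """Bucket a MIDI note number into one of the podium light indices."""
    idx = (note - LOW_NOTE) * NUM_LIGHTS // (HIGH_NOTE - LOW_NOTE)
    return max(0, min(NUM_LIGHTS - 1, idx))

with mido.open_input() as port:       # default MIDI input
    for msg in port:
        if msg.type in ("note_on", "note_off"):
            on = msg.type == "note_on" and msg.velocity > 0
            UNITY.send_message(f"/podium/light/{light_for_note(msg.note)}",
                               msg.velocity / 127.0 if on else 0.0)
```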

Powering these reactive particles, shaders, and animation systems is a sophisticated OSC data pipeline.

The incoming AI music track, generated by Intel’s musical neural net, is sent to our system via a Max/MSP patch and ingested by a local TouchDesigner app. The TouchDesigner app uses the audio data to generate realistic gestures and animation cues that feed into the Unity application, creating the performative visual flair seen on stage.
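The skeleton below sketches that bridge in Python with the python-osc library: audio-analysis messages arriving from the Max/MSP patch are turned into animation cues and forwarded to Unity. In production this logic lives inside TouchDesigner; the ports, addresses, and cue logic here are assumptions for illustration:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

UNITY = SimpleUDPClient("127.0.0.1", 9000)   # Unity app listening for cues

def on_amplitude(address, amplitude):
    # Forward a normalized amplitude as a gesture-energy cue.
    UNITY.send_message("/dj/gesture_energy", float(amplitude))

def on_beat(address, *args):
    # Pass beat triggers straight through so animations stay on the grid.
    UNITY.send_message("/dj/beat", 1)

dispatcher = Dispatcher()
dispatcher.map("/maxmsp/amplitude", on_amplitude)
dispatcher.map("/maxmsp/beat", on_beat)

# Listen for the Max/MSP patch on port 8000 and run until interrupted.
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```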