>> Thanks, everyone, for being here today. I appreciate you all joining us for a great conversation on Unity's XR Interaction Toolkit. My name is Doreen Brown. I'm a Product Manager at Unity, focused on the XR Input and Interactions initiatives. Prior to working at Unity, I was at Microsoft working on MR initiatives within the org.

>> I'm David Rudell, the Engineering Manager at Unity in charge of XR Input and Interactions, things like the XR Interaction Toolkit that we're going to talk about today.

>> Why don't we first start with an overview of how we're going to spend roughly the next 40 minutes? We'll begin with an overview of the XR Interaction Toolkit: what it is, why it's important, and how you can start using it for your next project. We'll then get hands-on with two demos, the first being a video of an app that you can build with the XR Interaction Toolkit, and the follow-up being a demo in the Unity Editor of that same app, showcasing how and why the XR Interaction Toolkit is critical to its success. Finally, we'll follow up with the roadmap: what's new, upcoming release plans, and the overall feedback we're receiving on the toolkit as we move forward with our future plans.
What is the XR Interaction Toolkit, or as you'll hear us refer to it pretty frequently, XRI? XRI is a Unity-made and -managed package that provides a high-level interaction system, giving creators a jumpstart on building interactive and intuitive AR and VR content. This includes a robust framework with components and APIs that handle the bulk of the boilerplate needed for creating highly interactive content. All of this is done with a layer on top of input that deals with delivering actions and events to the interaction layer. This brings support to XR devices that implement a Unity XR SDK, including OpenXR. We've also worked hard to expose input via both the legacy input manager and the Input System. But that's a whole lot of words to describe something that's very visual, so why don't we take a look at an app that's built using XRI? [MUSIC] I'll hand it off to Dave, who will go into depth about how the app works and how you can use XRI to build your own projects.
that I want to go over is what is included with XRI. What do we give you to get started to make your
building journey easier? First of all, I want to explain where XRI sits on the stack in
terms of everything in Unity. As you can see from the stack
image, we sit right below the
application layer, so your VR and air
applications and we sit next to the other packages
that are critical to building your applications
such as AR Foundation and Mars. Below that, you can see the
excerpt plug-in framework, which exposes all of
the nitty-gritty. So you can build your
application with XRI and not even have to worry
about anything below there. Or you can, if you want to, if you really need to get low level in the DLLs and
things like that, you can. B
ut we try to abstract away the
harder stuff that lets you get creating quicker and easier while maintaining the
greatest platform reach, and so you can see exactly how
many platforms we support. As you start to build your AR
As you start to build your AR and VR games and applications, what core components are really required and provided by XRI? To begin with, you have your XR Origin. We've wrapped this with some nice creation tools that let you create an origin, which creates your play area along with your left-hand and right-hand controllers, depending on which input model you're choosing. For the XR controllers, you can see we have two of them: the action-based controller, which is wired directly to the new Input System package, and the device-based controller, which is wired to the legacy input manager. Keep in mind that if you are using OpenXR, we currently only support the new Input System, so you'll be using the action-based controllers.

Below that, we have the actual interaction model. This is a pattern that's starting to show up a lot in the industry: we have interactors and interactables. What is an interactor? It's a component that lets you perform actions upon anything in the scene that you wish to interact with, and we tag those things with an interactable component. Our interactable components let us do things like pick stuff up, affect physics, place things in sockets, and so on, and the interactors are what we typically use on our controllers to deal with those interactables. We have three main flavors. The two most important are probably the direct interactor, which is for near-field interaction (picking things up, pushing things, manipulating physics in the near field), and the ray interactor, for grabbing things from a distance, object manipulation, and UI interaction. The last one on the list is the socket interactor. This is a special case: it works with the grab interactables, but it can be placed on almost anything in the scene where you want a grab interactable, or any kind of interactable, to be placed in a specific socket. In the example you saw, that's a key in a keyhole or a battery in a battery holder: anywhere you want that pairing of interaction to happen.
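To make the pattern concrete, here's a minimal sketch in code, assuming XRI 2.x component and event names such as `XRGrabInteractable` and `selectEntered` (the `GrabbableSetup` wrapper itself is our own illustration, not part of the toolkit):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Tags an object as grabbable at runtime and reacts to its select events.
public class GrabbableSetup : MonoBehaviour
{
    void Awake()
    {
        // XRGrabInteractable makes this object respond to direct, ray,
        // and socket interactors registered with the interaction manager.
        var grab = gameObject.AddComponent<XRGrabInteractable>();
        grab.selectEntered.AddListener(OnGrabbed);
    }

    void OnGrabbed(SelectEnterEventArgs args)
    {
        Debug.Log($"Grabbed by {args.interactorObject.transform.name}");
    }
}
```

In most projects you'd add the component in the inspector instead; the code path matters when objects are spawned dynamically.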
The thing that sits on top of this and manages all of it is what we call our interaction manager. This handles the state of the whole interaction loop: it goes through each interactor and each interactable every frame and determines which things are actually designed to work together right now, which things are supposed to be interacting. It obeys what we call our interaction layer mask. This works exactly like the Unity physics layers: anything that you want to interact together goes on a layer together, and things that you don't want to interact go on a separate layer. This can be things like teleportation. If you want to teleport to a specific location, or you want to designate interactables that are only for teleportation, you tag them on an interaction layer for teleport. If you want to deal with things in the near field versus the far field, you can set up interaction layers specifically designed to handle near and far interactions.
>> On top of that, now that we've defined the main interaction model, we have additional capabilities that we've added into XRI that we feel are essential to any XR content. First and foremost is locomotion support. We built in teleportation capabilities, with directionality coming soon so you'll be able to point which direction you want to face when you come out of your teleport. We've got continuous move and turn, letting you use your thumbstick to move in a first-person style throughout the scene, and then snap turn and turn-around, with customization of the snap angles. If you want to turn at a 45-degree angle or a 90, that's totally up to you.
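As a rough sketch of that customization, assuming the `ActionBasedSnapTurnProvider` component and its `turnAmount` and `enableTurnAround` properties from XRI 2.x:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Adjusts snap-turn comfort settings at startup.
public class ComfortTurnSettings : MonoBehaviour
{
    void Start()
    {
        var snapTurn = GetComponent<ActionBasedSnapTurnProvider>();
        snapTurn.turnAmount = 45f;        // snap in 45-degree increments
        snapTurn.enableTurnAround = true; // keep the 180-degree turn-around
    }
}
```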
In addition to locomotion, we've added haptic feedback into our interactions. The interactors have configuration for the different events on which you want to trigger haptic feedback: hovering on things, picking things up, and so on. You can customize how the haptic feedback is triggered, its intensity, its duration, and which hand it vibrates on, and everything is fully customizable, so you can build on top of that and create much more elaborate haptic models if you want to.
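A minimal sketch of such a custom haptic model, assuming the XRI 2.x `XRBaseControllerInteractor.xrController` property and `SendHapticImpulse(amplitude, duration)`:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// A custom haptic model: a light pulse whenever this interactor hovers.
public class HoverBuzz : MonoBehaviour
{
    [SerializeField] XRBaseControllerInteractor interactor;

    void OnEnable()
    {
        // The same effect can be configured in the interactor's inspector;
        // code like this is only needed for more elaborate haptic models.
        interactor.hoverEntered.AddListener(_ =>
            interactor.xrController.SendHapticImpulse(0.3f, 0.05f));
    }
}
```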
In addition to haptic feedback, we've added some basic visual feedback. We have things like hover highlights on grabbables and on the sockets where things are going to go. I think you saw in the video that the battery shows a blue outline while it hovers, before you let it go. All of those affordances are built in. We've also added a reticle system, so for anything you're pointing at in the scene, UI and so on, we've got customizable reticles for all of the different flavors, and you can customize them per grabbable, not just for the entire system. Anything that you want to special-case is very flexible.

On top of that, we've got our uGUI canvas support. Unity has two UI systems, UI Toolkit and uGUI; currently, we only support uGUI. We've built a custom XR input module that handles all of the interaction between our XR toolkit and the event system. In addition to that, we've also built in support for things like keyboards, mice, joysticks, and gamepads, so you have a little bit more flexibility in how the UI is navigated when not in XR.
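In practice, the wiring amounts to two components; a sketch assuming XRI's `XRUIInputModule` (on the EventSystem) and `TrackedDeviceGraphicRaycaster` types:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit.UI;

// Makes a world-space uGUI canvas receptive to XR ray interactors.
public class WorldSpaceCanvasSetup : MonoBehaviour
{
    void Awake()
    {
        // Pairs with the XRUIInputModule on the scene's EventSystem, which
        // replaces the standalone input module for XR pointer events.
        gameObject.AddComponent<TrackedDeviceGraphicRaycaster>();
    }
}
```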
A new feature that we've added recently is what we call intention filtering. This isn't a new concept; these are essentially filters that allow you to define specific heuristics for selecting things. When you're hovering between two objects evenly, what determines which one you're actually picking? It could be the last one you picked, the one whose collider you're closer to, or the one that you're looking at. We've added these intention filters so you can customize the way people interact with objects based on what they most likely intend to do. If someone is looking at an object while holding the controller between two of them, the assumption is that they're actually trying to pick up the one they're looking at. All of these signals go into our intention filtering engine and are scored; you can stack them on top of each other and weight them depending on the priority you want each intention to have. That's something we just released last month, and I highly encourage you to check it out.
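XRI ships configurable evaluators for this, but you can also write your own filter. A toy sketch, assuming the XRI 2.1 filtering API (`XRBaseTargetFilter` and its `Process` override); the `NearestOnlyFilter` name is ours:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.Interaction.Toolkit.Filtering;

// A toy intention filter: keep only the valid target nearest to the
// interactor's attach point, discarding everything else.
public class NearestOnlyFilter : XRBaseTargetFilter
{
    public override void Process(IXRInteractor interactor,
        List<IXRInteractable> targets, List<IXRInteractable> results)
    {
        results.Clear();
        IXRInteractable best = null;
        var bestDist = float.MaxValue;
        var origin = interactor.GetAttachTransform(null).position;
        foreach (var target in targets)
        {
            var dist = (target.transform.position - origin).sqrMagnitude;
            if (dist < bestDist) { bestDist = dist; best = target; }
        }
        if (best != null) results.Add(best);
    }
}
```

A real scoring filter would combine several such signals (recency, distance, gaze angle) with weights, which is what the built-in evaluators do.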
The next thing we added recently is a comfort mode sample. This is a simple tunneling vignette that shrouds your view. You can change the aperture, the color, and the ease-in and ease-out times. It just drops as a prefab under your main camera, and you customize it from there.

On top of all of this, I want to point out again the robust extensibility. If you've been paying attention to the MRTK3 release, you've seen that it's built on top of XRI, and that's a testament to exactly what we built it to do: to be extended and built upon as a framework that gets you going quicker.

If you're doing AR, which I assume a lot of you in this room are, you're going to want to install AR Foundation in conjunction with XRI, which enables additional AR features specific to the XRI module. If you haven't dealt with AR Foundation in Unity before, it's one of our Unity-built packages that allows creators to build AR games and apps on top of the well-defined AR platforms, such as ARCore for Android, ARKit for iOS, HoloLens, and Magic Leap. The AR functionality that we provide with XRI on top of AR Foundation includes an AR gesture system, which maps screen touches to gesture events. It also allows for placement of AR interactables as virtual objects in your real-world mapping. In addition, we have a translation layer that takes AR gestures and interactables and translates common gestures, such as placement, selection, translation, and rotation, into manipulations of your AR objects in your AR world. Last, we have AR annotations, allowing you to mark annotations on AR objects in the real world, so other people who experience it will see what annotations you left. Many of these features of XRI and AR Foundation together are showcased in Unity's AR Companion App, which released just a couple of months ago. On top of being a really cool showcase of what you can do, it's a very useful tool for AR developers, and I highly encourage you to check it out. It's free on the Apple App Store.
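As a sketch of the placement flow, assuming the AR module's `ARPlacementInteractable` and its `objectPlaced` event (the exact event-args property names may differ between versions):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit.AR;

// Logs each object placed by a tap gesture on a detected plane.
public class PlacementLogger : MonoBehaviour
{
    [SerializeField] ARPlacementInteractable placement;

    void OnEnable()
    {
        // objectPlaced fires after the gesture system instantiates the
        // interactable's placement prefab in the real-world mapping.
        placement.objectPlaced.AddListener(args =>
            Debug.Log($"Placed {args.placementObject.name}"));
    }
}
```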
But now we have some time, and what I want to do is dive into the Editor and show you how things are wired up in the sample that you saw earlier; I think that'll help. I'll show exactly how we've structured things and go from there. Let's take a look. I blew this up, so hopefully everybody can see it. In our scene, we have a few prefabs to get you going in our sample project. First, we have our complete setup, wired with the origin, interaction manager, and the other things I've talked about. The interaction manager is here; it controls all the interactions between the interactors and interactables, and I'll show you some examples of how the interactables are wired up in the scene. We have the event system, which is wired to our UI, and over here you can see our input module and how you would customize it. Point, click, scroll: everything you would expect to be able to do with the UI, you can do with this input module, and it's fully customizable through both our Input System, with our default input actions, and the old input manager. If you're not using OpenXR, you can turn on the old input manager and use it that way.

On the XR Origin itself, we have wired it up with all of our locomotion providers, so things like snap turn, continuous turn, teleport, and dynamic movement, and all of these have specific things you can customize. As you build out your own projects, you can see how the locomotion is wired up; we've wired it to the primary 2D axis on the left-hand controller. The other interesting thing here is that we've added some additional nuance to the project that allows you to switch handedness, switching locomotion between hands. Maybe you want teleport on one hand and your standard FPS controls on the other, and you can swap that around; we've got scripts set up to walk you through how to do all of that. In addition to the actual locomotion providers, we also have the standard Unity character controller, which handles physics and movement through the scene.

As you can see, the main camera is what you'd expect: it's just a tracked pose driver, nothing extra special there. But underneath is the tunneling vignette prefab I talked about earlier. I'll expand this so you can see it, then check out the Game view. Right now this is with the vignette effect turned on. You can affect things like aperture size; these are things you preconfigure, though you can also change them at runtime, and they are triggered by the locomotion provider. As you move, it automatically triggers the ease-in effect for the vignette shroud to come into your vision. You can write your own locomotion providers and plug them in if you wish, and you can also add additional custom effects on top of the color and feathering effects and things like that.
Now, on the hands themselves, we have our controllers. This scene is enabled with OpenXR, so we are using our action-based controller. As we go through here, you can see the standard things you'd see in a tracked pose driver, such as position and rotation. But then we have these abstracted actions that we call select, activate, UI press, haptic device, and so on. These are all wired through, again, the new Input System. You can see we've categorized them by hand, and then you can customize how each of these comes through. Right now we've got the UI press mapped to the trigger, select mapped to grip, and so on and so forth. You can customize that however you want; that's the power of the new Input System, being able to have that flexibility. You'll also notice, if I jump back there, that all of this is wired through with just the XR controller reference. We're not referencing any specific devices; we're not saying this is only for the HoloLens or the Valve Index or anything like that. It's handled, as I talked about, by the abstraction layer; all of the devices are managed the same under the hood. But if you wanted, for instance, the trigger to be a little less sensitive on the Valve Index or something like that, then you can come in here, create a new binding for it, and customize it that way.
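A minimal sketch of that wiring from code, assuming the XRI 2.x `ActionBasedController` properties (`selectAction`, `activateAction`) and the Input System's `InputActionProperty` wrapper:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.Interaction.Toolkit;

// Re-binds a controller's abstracted actions from code instead of the
// inspector; both references point into an input actions asset.
public class ControllerWiring : MonoBehaviour
{
    [SerializeField] InputActionReference selectReference;   // e.g. grip
    [SerializeField] InputActionReference activateReference; // e.g. trigger

    void Awake()
    {
        var controller = GetComponent<ActionBasedController>();
        // No device-specific code: the bindings inside the asset decide
        // which physical controls map to select and activate.
        controller.selectAction = new InputActionProperty(selectReference);
        controller.activateAction = new InputActionProperty(activateReference);
    }
}
```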
>> In addition to the hands, now we get to the actual interaction piece. The controller hands off select and activate actions and things like that to the interaction manager, which then tells the interactors what to do. You can see here's our direct interactor, which we use to grab things up close. It's going to be looking for a select action; anything where you're grabbing or picking something up will trigger that state, and then you can activate it, in our scenario by pulling the trigger. I'll show you an example of how that's wired up with the Blaster here in a minute. The one thing I did want to show you is the ability to fire audio events, haptic events, and additional interactor events on all of these interactors. As you're picking things up or hovering over things, you can trigger sounds, haptics, and any other custom code that you want to wire up. When you're looking at this, keep in mind that this is all no-code: if you have your own scripts, you can drop them in and start wiring things up without writing any lines of code, which is super cool because it gets you up and running quickly and easily.
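What "drop them in" means in practice: a public method whose signature matches the interactor's event can be assigned straight in the inspector. A sketch, assuming the XRI 2.x `SelectEnterEventArgs` type (the `PickupCounter` component is our own example):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// A drop-in script: the method signature matches the interactor's
// Select Entered event, so it can be hooked up entirely in the inspector.
public class PickupCounter : MonoBehaviour
{
    int count;

    public void OnSelectEntered(SelectEnterEventArgs args)
    {
        count++;
        Debug.Log($"Picked up {args.interactableObject.transform.name} " +
                  $"({count} total)");
    }
}
```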
I will say that this sample project has a lot more in it to help you get up and running than just the base XRI package itself. Hopefully, in the next couple of months this project will be available for all of you to check out.

Let me show you a little bit of how the interactables work in the scene. I mentioned the Blaster, so we'll fly over to it here, or the Launcher, sorry. The Launcher is just an XR Grab Interactable, so it's something you can grab and pick up. We've got all of these other options, knobs, and dials set up purposely for the Launcher. One interesting thing to keep in mind is that we have an attach transform; this is exactly where your hand will attach when you pick it up, and it's custom-defined for any object. As you can see, it will snap your hand to the handle, so you're not picking it up at some random spot that you don't actually want it attached to. Then, if we expand the interactable events, this is where, on activate (you're holding it and now you pull the trigger, or whatever the activate button is), it's going to perform two actions: it sets an animation for the fire state, and it launches a projectile. Again, all of this is wired up in the Editor rather than through code, but you could do it either way.
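Done in code, the projectile half might look like the following sketch. `LauncherFire` is a hypothetical stand-in for the sample's actual script; only `ActivateEventArgs` is an XRI 2.x type:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Wire OnActivated to the grab interactable's Activated event, either in
// the inspector or via activated.AddListener in code.
public class LauncherFire : MonoBehaviour
{
    [SerializeField] Rigidbody projectilePrefab;
    [SerializeField] Transform muzzle;
    [SerializeField] float speed = 10f;

    public void OnActivated(ActivateEventArgs args)
    {
        var shot = Instantiate(projectilePrefab, muzzle.position,
                               muzzle.rotation);
        shot.velocity = muzzle.forward * speed;
    }
}
```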
Last but not least, I wanted to briefly show you the socket interactors and interactables. Here we have a simple battery; this is another Grab Interactable. You can see we've got our Grab Interactable, and we've added something we call a keychain to it. This is something we came up with that allows you to define a specific set of interactable objects. Instead of just using the interaction layer mask, we've set it up so that the battery slot will only accept things that are keyed correctly. This is a closed socket, so it only accepts a specific key. If we look at the lock, we see that the required key is the battery socket key. The battery holds that key, so when you go to push it in there, the socket says, yes, this is the right one, you can let it go. Then, if we look at the interactor events, you can see what happens when it actually enters the select state: it turns on the Perler-bead machine, or the giant Lite-Brite, if you will. It sets that active when the socket has something in it, and when you pull the battery out, it disables it again. A pretty simple way to set up a simple two-way interaction.
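A hypothetical re-creation of the keychain idea (the sample's actual implementation may differ): interactables carry keys, and a derived socket refuses anything without the matching one, assuming the XRI 2.x `XRSocketInteractor.CanSelect` override:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Our stand-in for the sample's keychain component.
public class Keychain : MonoBehaviour
{
    [SerializeField] List<string> keys = new List<string>();
    public bool HasKey(string key) => keys.Contains(key);
}

// A closed socket: rejects anything that doesn't carry the required key,
// on top of the usual interaction layer mask check in the base class.
public class KeyedSocket : XRSocketInteractor
{
    [SerializeField] string requiredKey = "BatterySocketKey";

    public override bool CanSelect(IXRSelectInteractable interactable)
    {
        var keychain = interactable.transform.GetComponent<Keychain>();
        return base.CanSelect(interactable)
            && keychain != null && keychain.HasKey(requiredKey);
    }
}
```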
The rest of the scene is set up with different stations that highlight different aspects of the XRI toolkit. We have the basic 2D UI over here and how that's wired up: sliders, buttons, all of that. We have our 3D UI; this is where we showcase some of our intention filtering. You can see some of these controls are very close together, so when you enable these checkboxes up here, it turns on the intention filtering and lets you see in real time how it impacts what the user sees and does in the scene, which is cool. These are also all just Grab Interactables, but they've been special-cased: this is an XR lever that only works on a certain axis, and there are joysticks, knobs, things like that. Things that we feel are very normal to expect in an XR experience, things that you want to actually grab and play with: buttons, sliders, and so on. Then we have the physics station over here: basic physics interactions, being able to push open doors and open filing cabinets. This is also where we have our lock-and-key example. Again, in this example we use the same keychain approach, where the lock requires the specific key provided on this key grabbable; the key sitting here has an actual keychain object on it. That's it, that's the project in a nutshell. Hopefully, it will be released soon for all of you to check out. Now, I'll hand it back to Doreen to tell you what's coming up next.
>> Thanks, Dave, for the great overview of a project that uses XRI. Let's now talk a little more about what's coming up with XRI 2.0 and beyond. I first want to say thank you to everybody in this room who uses Unity and who has used XRI, as a lot of the feedback we've received about the toolkit has gone directly into influencing our near-term roadmap. If we look at some of the key highlights we're working on today, there are things like extensibility improvements and controller and interactor refactors, with keyboard and mouse simulation for the device simulator: how do we better interact with keyboards and mice through devices themselves? We've also received a lot of feedback on our documentation and sample projects and how we can make them better. The project we just showed you today is something we're actively working on, and the hope is that we can release it to you in the near future.

As we address this feedback, we're also looking at expanding what XRI can really do for you: things like updated interactions, including dynamic attach points, multi-hand support with multiple solver types, and two-handed grabbing and movement transformations. We're also looking at locomotion improvements, including teleport directionality and the optional comfort mode vignette sample we talked about earlier in the presentation. We've even updated the interactors, including a poke interactor. Thanks to all of you who have asked for a poke interactor; we've heard you, we're adding it, we promise. There are also new UI interaction improvements, with updated customization of input actions and input manager settings, so it's a little easier for you to use. I will note that some of this work is already in our 2.1.0 pre-release package; the rest of what's coming up will land imminently. Think a scale of months, not years.

Finally, we're looking at what's next. How do we take this expanded set of features and use it with an even broader array of inputs? We're looking at things like revamping controllers and input handling to take new forms of feedback. Once we do that, we can start thinking about interacting with eye gaze and hands: what does it mean to have gaze-based interactions, with additional input for user intention filtering from things like eyes and fingers? Another area is systems for selection and throwing: can I look at a point in space, throw a ball with my hand, and have both forms of input combine so the ball goes where I want it to go? As we build all of this, we understand that our samples and documentation need to improve to accommodate these new features, so we'll be updating our samples and our docs as we build these features out.

How do you get started with the XR Interaction Toolkit? We've got this handy-dandy QR code which, if you scan it, will take you to a series of links: first, to our documentation; second, to a set of Unity Learn modules that showcase XRI; and third, to our detailed roadmap, which showcases the features we're building with XRI. I will note that we update this roadmap on a fairly regular basis; this link is the place to go to check what's coming up next with the XR Interaction Toolkit.
I think we're good for questions. Thank you, everybody, for being here. If you want to raise your hand, we'll answer anything you have. Yes.

>> [inaudible]

>> There's a Linktree link at the bottom; if you go there, it should work. I will double-check that QR code to see why it is no longer working. Thank you.

>> [inaudible] Any other questions? [inaudible]?

>> Yeah. In the Editor, the sample that I showed you that was triggering the animation, we have a special component that handles that in-between state. We're directly controlling the 3D model, so it's not actually triggering an Animator.

>> So it's not set up to use an Animator?

>> No, but you could.

>> How would you do that?

>> You could trigger it that way; we just didn't want to set up the whole animation state machine and everything for this simple project.

>> Got you.

>> [inaudible]

>> Repeat the question.
>> Your question was: is there anything that can only be set up in the Editor and can't be set up in code? As far as I'm aware, everything that can be done in the Editor can be done in code. It was built that way specifically so that people who are building frameworks on top of XRI have access to everything it can do. I can double-check, but I believe everything is exposed. Yes.

>> [inaudible]?

>> The question was: can you build custom events on top of our existing XRI events? Yes, you can. There are two ways to do that. We have the events exposed, so you can tie directly into those, or you can extend the class, receive those events as they come in, and pass them off to whatever backend framework you already have set up. That's actually how MRTK does a lot of its interaction on top of XRI.
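A sketch of that second approach, assuming the XRI 2.x protected `OnSelectEntered` hook; `MyFramework` is a hypothetical stand-in for whatever backend you already have:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Hypothetical stand-in for your own framework's event bus.
static class MyFramework
{
    public static void Notify(string evt, GameObject go) =>
        Debug.Log($"{evt}: {go.name}");
}

// Extend the class and forward XRI's protected event hooks into your
// own system as they come in.
public class BridgedGrabInteractable : XRGrabInteractable
{
    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);
        MyFramework.Notify("grabbed", gameObject);
    }
}
```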
>> The question was: do we plan to support UI Toolkit as well? As soon as UI Toolkit has a world-space option, then yes, we're going to be looking at that. We're waiting on the UI Toolkit team to extend from screen space into world space so that we can take advantage of it, because it doesn't make much sense until that is supported. We're very, very interested in it.

>> I will note that this is not the first time we've received that feedback; we are actively working with the teams at Unity to see how we can align our roadmaps.

>> Can you talk about [inaudible]?
>> The question was whether I can talk a little more about how MRTK works with XRI. What I can tell you is that MRTK3 was built on top of XRI as a foundational layer; if you're using the new MRTK3, it requires XRI to function properly. All of the interaction layers, all of that, have been built on top of it, and it's a good example of what you can do and how far you can take it. I will mention that they built their own custom poke interactor while they were waiting for us.

>> To do our custom poke interactor.

>> Yeah. If you want a poke interactor, they have some really cool UI elements and things like that as well, built on top of our UI work as well as their work with the poke interactor.

>> [inaudible]

>> Yes. Your question was essentially: what is required by the new Input System versus the legacy input manager? The limitation of the legacy input manager is that you would not be able to use OpenXR. Our Unity OpenXR package utilizes and depends on the new Input System package to deliver events and handle the input bindings, so if you're using OpenXR, you're required to jump over to the new Input System. If you aren't using OpenXR, you could use the old input manager to handle inputs, and most of that would have to happen through code; it's all hard-coded. It's not nearly as powerful or as flexible, but some people have seen that it performs a little better than the new Input System. Depending on your use case, you might have to make some tough decisions. We're working to streamline that and get the performance up. Yeah.

>> Can you talk about [inaudible]?

>> That's a good question.
XRI does not directly expose or unwrap the inputs that OpenXR and all the other XR SDKs deal with under the hood. When you import all of the packages, you do have access to write your own scripts that can call into, say, a custom extension in OpenXR that handles a specific thing, like the hand interaction profiles for the HoloLens. Those things you can load and access; you just have to wire them into XRI yourself. We're working to make that a little more flexible.

>> If [inaudible]

>> Correct. That's a very good point, a very good idea. Do you have a use case that you can think of off the top of your head?

>> [inaudible]

>> I can tell you that if it's exposed through OpenXR, there's generally a way to pipe it through the new Input System, and once it's there, XRI can take it, handle it, and do whatever it needs to do with it, even things like haptics and eye gaze. If you ever get stuck with a problem like that, I highly encourage any of you to reach out on the forums. We're active there, looking at the forums, checking e-mail, and looking at bugs at least once a week. Me and the team, the engineers, are actively involved in communicating with the community and trying to answer your questions. Hopefully, you don't feel like you're being ignored, and if you are, you can just make a little more noise and we're more than happy to help.

>> Between the forums and the feedback requests on our product roadmap website, we take a fairly active look at what the community is asking for, so feel free to reach out through any of those avenues at any time. We really look forward to your feedback.
>> Any other questions, thoughts?

>> [inaudible]

>> Your question was: is there any existing support for, say, hand tracking or hand poses or anything like that? Currently, we do not have any native hand support; we're actively working on it. I do know that the hand interaction profiles that come from the HoloLens, and some of the other gesture support exposed through AR Foundation, are accessible through XRI or through the Input System in general. But being able to track individual digits and things like that, that's coming in the future.

>> [inaudible]

>> Yeah, if you wanted to do it that way, or if you wanted to map a pinch with the hand or something to XRI.

>> Any other questions? Well, thank you very much for the time. We appreciate you being here. [MUSIC]