Collaboration with NVIDIA Omniverse
Michael Kass, senior distinguished engineer at NVIDIA and the overall software architect of NVIDIA Omniverse, joins hosts Patrick Cozzi (Cesium) and Marc Petit (Epic Games) on Building the Open Metaverse to discuss collaboration with NVIDIA Omniverse.
Announcer:
Today on Building the Open Metaverse.
Michael Kass:
Our idea of the metaverse is that you're going to have some description of the virtual world, and we think the right way to do that is USD. We think that's the way to connect all the metaverses together. And then having done that, you want to visualize it in a variety of different ways. The science fiction literature is all about virtual reality or augmented reality; but there are people who are going to want to interact with these worlds through traditional desktop screens or tablets or anything else, and we want to support it all.
Announcer:
Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together, hosted by Patrick Cozzi, from Cesium, and Marc Petit from Epic Games.
Marc Petit:
All right. Hello everyone, and welcome to our show, Building the Open Metaverse. It's the podcast where technologists share their insight on how the community is building the metaverse together. And today, we have the extreme pleasure to welcome Michael Kass, from NVIDIA. Michael, welcome to the show.
Michael Kass:
Oh, thank you very much.
Marc Petit:
And of course, I'm here with my partner in crime, Patrick Cozzi. Patrick, how are you?
Patrick Cozzi:
How are you, Michael? Doing great, Marc.
Marc Petit:
So, Michael, I think we need to introduce you, because you're a pretty unique guest for us. I was mentioning before that I learned from your papers at school. So for me, it's a moment to meet someone like you. I remember your paper on spacetime constraints in the late eighties; and you're a senior distinguished engineer at NVIDIA. But before we get there, I want to talk about your background, because it's also pretty unique, your kind of extracurricular activity. Not only did you win an Academy Award during your tenure at Pixar; thank you again for that, and congratulations. But tell us about these challenges you put upon yourself: competitive ice skating, ballroom dancing, juggling.
Michael Kass:
Yeah. I like to take my hobbies seriously. Sometimes the better you get at something, the more fun it is. So, I do these things for fun; but I like to see if I can get the best instruction that I can, and do whatever I can to improve it.
Marc Petit:
Great principle; and there's a very competitive aspect to it, right? You like competition?
Michael Kass:
Well, I just... I like... I do these things for fun; and the better I get, the more fun it is. And sometimes, as you know, in a lot of areas of life, if there's no deadline, nothing happens. So I enjoy competitions more, because of the fact that they force you to get yourself ready and prepare. And sometimes it lights a fire under your butt that you need, to make sure you're not going to embarrass yourself. So, it really comes from that.
Marc Petit:
Yeah. And your achievements are pretty impressive. You're the software architect for NVIDIA Omniverse, which is one of the big projects in the industry. You've held very high-level positions at Intel, Magic Leap, Pixar for many, many years, and Apple. Education at MIT, Princeton, and Stanford; an ACM Fellow. So, one of the founding, one of the key contributors to where CG is today. Thirty years later, it's good to see CG becoming the center of the world; and you've contributed a lot to that. So thanks. Thanks for that.
Michael Kass:
Well, thank you. Thank you very much.
Marc Petit:
All right. So, tell us a little bit of your journey to the metaverse.
Michael Kass:
Yeah. Well, I've come at this from, I guess, a variety of different angles. Earlier in my career, I worked in computer vision; so, trying to look at the world, understand it, create models of it. And then, from the other direction, changing over to computer graphics; so, starting with models of the world, and trying to figure out how to visualize them, and how to deal with models of great complexity. And then, after 18 years at Pixar, as you mentioned, I spent a couple of years at Magic Leap, looking at augmented reality, and at Intel, looking at augmented and virtual reality. So these are all different aspects of what I think people are converging on, for the metaverse.
Patrick Cozzi:
Yeah. So Michael, could we jump in, and tell us about NVIDIA Omniverse, and your role there?
Michael Kass:
Sure, sure. So, the origin of Omniverse, basically, is that Rev and I joined forces; Rev Lebaredian, the VP of Simulation Technology at NVIDIA; we had visions of where it should go. And the fundamental problem that we were given by Jensen (Huang) was to figure out a way to connect all of the existing little bits of technology at NVIDIA. He would see one very interesting piece of technology, and another very interesting piece of technology, and would ask for them to speak to each other. And the inevitable answer was, "Yeah, yeah. We could do that, but it'll probably take about three man-months."
Michael Kass:
And so, the first idea was, "Hey, maybe we can make use of existing game engine technology." And we ultimately found out that what we wanted was something pretty different. We wanted a really universal and modular way for all these different bits of graphics technology to interact, to talk to each other. And where we converged was on this idea of having a single, sort of central source of truth about a virtual world, connecting you to a variety of clients that could examine that virtual world, that could change it, that could then put the results back.
Michael Kass:
And we decided on essentially a publish-subscribe model: you would subscribe to whatever parts of the world you care about, and whatever you wanted to change, you would publish to a central place that everybody could see consistently. And we chose Pixar's USD, Universal Scene Description, as the basis to describe it all. Because this is something which evolved over decades, and was designed by Pixar over the years to have the full set of features necessary to describe virtual worlds at the complexity used in feature films. And it has been boiled down through that experience to just an awesome way to express what there is in a virtual world, with the richness that you need for authoring it, for constructing it. And Pixar has been kind enough to open source that to the rest of the world. And we were very happy to jump on it fairly early and see its promise.
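To make that publish-subscribe idea concrete, here is a minimal sketch using only Pixar's open-source pxr Python bindings. It shows the local half of the pattern (registering for change notices on a stage, and authoring an edit that triggers them); the networked version of this, Nucleus, is NVIDIA's own protocol and is not shown here.

```python
# Minimal sketch of the publish/subscribe idea over a USD stage,
# using only Pixar's open-source pxr bindings (not the actual
# Nucleus protocol, which does this over the network).
from pxr import Usd, UsdGeom, Tf

stage = Usd.Stage.CreateInMemory()
box = UsdGeom.Cube.Define(stage, "/World/Box")

def on_objects_changed(notice, sender):
    # A "subscriber": called whenever anything on the stage changes.
    for path in notice.GetChangedInfoOnlyPaths():
        print("attribute changed:", path)
    for path in notice.GetResyncedPaths():
        print("prim resynced:", path)

# Subscribe to change notices on this stage (keep the listener alive).
listener = Tf.Notice.Register(Usd.Notice.ObjectsChanged, on_objects_changed, stage)

# A "publisher": any edit to the stage triggers the notice above.
box.CreateSizeAttr().Set(2.0)
```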
Marc Petit:
So you mentioned scale; and I heard Jensen say during the GTC keynote that Omniverse is a graphics simulation solution at data center scale. So can you tell us what data center scale really means? And how big can it go?
Michael Kass:
Right. So, of course, we're exploring that now; we're trying to push it as far as we can. But it's a different model than the model that drove most game engines. Most game engines assume that they have to run locally on every client; and that client can be as small as a Game Boy or a phone. And it has to be able to render everything and, for example, simulate the physics inside of that very limited amount of resources.
Michael Kass:
And we designed Omniverse from a different perspective. Our perspective was, "Well, if you move to something which is Cloud-centric, then you can afford to do anything that's view-independent in the data center." And you guys have probably seen GFN, GeForce Now. We recently announced that we have 14 million people using it.
Michael Kass:
And what we've managed to prove is that you can deal with the latencies. So we have data centers all around the world. And as soon as you've solved that problem, it's possible to move more and more, almost everything, into the Cloud. And once you do that, the way you want to design, the way you want to put together a shared 3D experience, is very different. And that's the philosophy of Omniverse.
Michael Kass:
So, in a typical game, for example, you can afford to spend perhaps 15% of your total computational resources on physics. And that means you can spend 15% of a min-spec node. But if you move everything into the data center, let's say you have a hundred users, you can still afford to spend 15% of the total resources on simulation. But now, instead of 15% of one node, that's 15 entire nodes. So we're looking at trying to expand Omniverse to where you can use, as you mentioned, the whole data center.
Michael Kass:
As you know, NVIDIA purchased Mellanox, so we now own their InfiniBand connectivity. And we have very high speed connections among the GPUs in our DGX products. So we're looking to take advantage of all those assets that we have to create really, really astounding scaled-out simulations and experiences in the future.
Patrick Cozzi:
So, Michael... Yeah, a few episodes ago, Michael, we were talking about hybrid architectures, where some things may run at the edge or on the client, and some things may run in the cloud and the data center. Could you expand a little bit on what you were just talking about, on how computation may get distributed across all those resources?
Michael Kass:
Yeah. So, we want to basically have every piece of the computation go on whatever is the most appropriate machine. If you're doing VR/AR, you want absolute minimum latency, so you want that to happen as close to you as possible. It could happen on your device; but if you want higher quality, our experience with edge connectivity is good. And that gives you a way to make sure that everything is responsive enough, that the VR/AR experience doesn't have latency that is bothersome. But then, the rest of the computation can happen further away, in some kind of data center.
Michael Kass:
And maybe there's some simulation which requires a collection of machines that need to be very, very tightly coupled, and actually need to be physically right next to each other. And that could happen in a single central data center, if necessary. So, we are trying to make every part of Omniverse as modular as possible, so that it can support all the different needs, depending on what it is that you're trying to do.
Marc Petit:
So you guys have this CloudXR technology, which I think promises data center based or cloud-based VR. Do you think that's something we have line of sight on deploying in the enterprise? I'm not talking about consumer, because that's the last mile of the problem. But do you think we can have collaborative VR in the enterprise using those technologies?
Michael Kass:
Absolutely. We have that working at NVIDIA today. So, we have the ability to do AR with tablets. We use the APIs on the tablet that figure out where it is, where it's looking. We use that to export a camera in USD into our Nucleus server, which describes the entire virtual world. And then, anybody can subscribe to that camera, do the render, and that render can be streamed out to the tablet or to somebody in VR. And it actually works quite well.
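A minimal sketch of that camera-publishing idea, assuming the open-source pxr bindings; the in-memory stage here stands in for a Nucleus-hosted one, and the pose values are hypothetical stand-ins for what the tablet's AR tracking APIs would report.

```python
# Sketch: publishing a tablet's camera pose into a shared USD stage,
# so any other client can subscribe to it and render that view.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateInMemory()  # stand-in for a Nucleus-hosted shared stage
cam = UsdGeom.Camera.Define(stage, "/World/TabletCamera")
xform_op = cam.AddTransformOp()     # a single 4x4 transform op on the camera

def publish_pose(position, rotation):
    # Compose the tablet's tracked pose into the camera's transform;
    # subscribers rendering this camera pick the change up from the stage.
    m = Gf.Matrix4d(1.0)
    m.SetRotateOnly(rotation)
    m.SetTranslateOnly(Gf.Vec3d(*position))
    xform_op.Set(m)

# Hypothetical pose coming from the tablet's AR tracking.
publish_pose((0.0, 1.5, 4.0), Gf.Rotation(Gf.Vec3d(0, 1, 0), 30.0))
```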
Michael Kass:
So, our idea of the metaverse is that you're going to have some description of the virtual world, and we think the right way to do that is USD. We think that's the way to connect all the metaverses together. And then, having done that, you want to visualize it in a variety of different ways. The science fiction literature is all about virtual reality or augmented reality, but there are people who are going to want to interact with these worlds through traditional desktop screens or tablets or anything else. And we want to support it all. And our vision of the metaverse is also much wider than just these social interactions.
Michael Kass:
Right now, so much of what we do involves information that we get from the web, or transactions, business that we do through the web. And a very great deal of that would benefit from being in 3D. So, we really see the metaverse as also being like a 3D web done right: something that allows anybody to author experiences that could be particular to your business or your social group or whatever it is that you're trying to accomplish, and all these things connect through portals into virtual 3D worlds.
Marc Petit:
NVIDIA seems to be very focused on AI, sometimes describing Omniverse as a place where robots learn how to be robots. As a computer vision expert, what's your take on that synthetic data framework? How important is it for training the next generation of machines, and what role does Omniverse want to play there?
Michael Kass:
Yeah, so early on, I think I was a bit of a skeptic about some of the deep learning work, and I thought a lot of it looked a bit over-hyped; but I've been really humbled by what people have achieved with it. It's very clear that extraordinary results have already come out of what deep learning has made possible, and it's only accelerating. So, what are the challenges? One of the biggest challenges is: how do you get proper training data? Yes, you can hire thousands of people to run simple apps and to label things, but that only gets you so far. If you can create things synthetically, then you know exactly what you have. And now you don't have to worry about whether your humans were a bit lazy and made some errors in what they labeled, or whether maybe it's too laborious for them to label things on a pixel-by-pixel basis.
Michael Kass:
And so, we've seen great results with domain randomization and synthetic data generation to create data sets to train these networks. The more data you give them, the better they are; and the more accurate the data is, the better they are. So now, with Omniverse, the RTX renderer does astounding quality, real-time path tracing; or give it a few seconds and the quality is even more mind-blowing. Putting all that together with modern techniques for training these networks, it's just astounding what we've already achieved, and I'm waiting with bated breath to see what's going to happen in the next year, two years, five years. I think it's going to be extraordinary.
Marc Petit:
So, I mean, we talk about USD every episode of this podcast; it's fascinating. And of course we understand that USD offers a pretty static representation of the world, and we know that the metaverse is going to be about exchanging fully simulated worlds. So, what is your vision, and how does NVIDIA approach that grand objective?
Michael Kass:
Right. So, as you know, USD was developed by Pixar, and obviously they had their own needs at the front of their minds. One of the things that it's really good at is dealing with very complex environments. What they haven't really needed in the past is rapid updates, rapid changes. They traditionally would have animators create scenes, and they'd like to visualize those scenes; but they didn't have the kinds of needs that people do in the game world, where you want instant updates for things that can be quite complicated. So, we've been spending a lot of time figuring out how to cut fast paths through the USD library, so that we can maintain its flexibility and still achieve the kind of update rates that people need for interactive experiences. Essentially, on the film side of things, people want absolute full flexibility. And on the game side of the world, people want absolute maximum performance; they're not willing to sacrifice one drop of performance.
Michael Kass:
And so, what we've been doing is to try to see how... We really think that flexibility is important, and we want to maintain it. We don't want a world where you have to bake things overnight before you can experience a virtual world. So, we've been looking at USD and cutting paths through it so that things can go very, very quickly. You saw some of that at GTC: we showed what our DRIVE simulator is doing, and it's all based on USD, and based on these fast paths that we've created and that we are going to expose as soon as we can from Omniverse, so that everybody can build high-speed, interactive experiences. And our DRIVE simulator actually works with hardware in the loop, so it's essentially virtual reality for the car. We generate what its sensors would see, as if it were real, in this virtual world and with real timing. And we're doing that all with USD, which many people didn't think was possible.
Patrick Cozzi:
Michael, do you have any examples? Is this something like pre-triangulating geometry or compiling shaders?
Michael Kass:
Well, when we started building on USD, we had some engineers who looked at it and said, "Look, I just changed this attribute over here, and look at my call stack. This is never going to work." And we had to come back and say, "No, no, really, we're going to fix the things that are problematic. We're going to find ways around this." And one after another, we've been getting rid of bottlenecks in the USD library. One of the things that's central to USD, and the reason that it's different from everything else, is its notion of composition. It's not simply that you put a value here and that's the end. It has a notion of layers, and that allows a bunch of people to work separately on layers. Then those layers are composed. That composition logic can sometimes make it slow, as can the way that composition makes its way through the renderer; and we've just been attacking all those bottlenecks everywhere we find them and-
Patrick Cozzi:
I see, so Michael, you're talking more about optimizing the USD library, not necessarily changes to the specification?
Michael Kass:
That's right. We're devoted to the idea of being able to bring any valid USD into Omniverse and export valid USD to everything else, because a central principle of Omniverse is universal interoperability. So, we connect as many applications as we can with Omniverse, and all they have to do is read and write USD. But where you want maximum performance, we're working with Pixar as much as we can to make that standard. And where it's not standard, we'll figure out other ways to distribute it, with libraries on top of USD.
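A minimal sketch of the layer composition Kass described a moment ago, using the open-source pxr bindings: two layers carry opinions about the same prim, and the composed stage resolves the stronger one.

```python
# Sketch of USD layer composition: a weaker base layer and a stronger
# override layer are composed, and the stage resolves the strongest opinion.
from pxr import Sdf, Usd

base = Sdf.Layer.CreateAnonymous(".usda")
base.ImportFromString("""#usda 1.0
def Sphere "Ball" {
    double radius = 1.0
}
""")

override = Sdf.Layer.CreateAnonymous(".usda")
override.ImportFromString("""#usda 1.0
over "Ball" {
    double radius = 5.0
}
""")

root = Sdf.Layer.CreateAnonymous(".usda")
root.subLayerPaths.append(override.identifier)  # stronger: listed first
root.subLayerPaths.append(base.identifier)      # weaker

stage = Usd.Stage.Open(root)
print(stage.GetPrimAtPath("/Ball").GetAttribute("radius").Get())  # -> 5.0
```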
Marc Petit:
So, you talk about universal interoperability. So, how do we achieve that for like procedural components? Whether it's hair grooms and crowds and visual effects, or more sophisticated, or just physics in the simulation space. So, what is your take on achieving that level of interoperability?
Michael Kass:
Right. So one way to do that is what I originally described: you have this virtual world, and you can look at some part of it. You subscribe to some part of it. You do some computation that adjusts the virtual world. You put it back. If that's fast enough, then you now have something which you can plug in anywhere. So, let's say that you have a character rig in Maya, and maybe Maya is the only application that understands it. Well, you could have a copy of Maya connected to Omniverse, and somebody can change a joint angle or something; any application can do that. Then Maya sees, oh, this joint angle has changed; its rig executes, it re-posts the points, and those points go back into Omniverse. Now everybody sees it. So there's a kind of universality that way.
Michael Kass:
One of the interesting experiences we had was when we first connected up Revit to other applications, I think Maya was probably the first one, through Omniverse. Revit has a constraint engine. So, if you move a door too close to a window, maybe it's going to say, "Oh, that's too close," and it's going to move the window over. That constraint engine doesn't exist in Maya. But if you have a live version of Revit and a live version of Maya subscribed to the same resources inside of Omniverse, then the Maya user can move a door to a place where it's too close to the window. Revit, being subscribed, sees that and says, "Wait, wait, wait, no. That can't work." And then it moves the window far enough away. So, to the Maya user, it appears as if the Revit constraint engine has been embedded in Maya.
Michael Kass:
And that's one way you can get this kind of procedural activity: no matter what the source is, as long as it reads and writes USD, it can interact with all the other parts of the Omniverse ecosystem. In addition to that, we want to make it possible for people to author some behavior conveniently, in a portable way. So, we think of USD as essentially the HTML of 3D worlds, the HTML of the metaverse. And it's essentially purely declarative. It doesn't tell you: do step A, do step B, do step C. It says, "Hey, here's a world. There's a sphere over here. There's a light over here. Et cetera." It tells you what's in the world, but it doesn't really tell you how it operates. And we've supplemented that with descriptions. For example, we worked with Pixar and with Apple to create a rigid body schema. So now you can take that world and you can say: okay, these objects have a joint that connects them. And here's something that has a coefficient of restitution, so if you drop it, you understand how high it's going to bounce. And this is some collision geometry that's simplified for this object, et cetera. Now you've defined a bit about the behavior, because if it all follows the rules of physics, then you can use our open source physics engine, PhysX, or you can use other people's engines in there. And that gives you a certain amount of behavior. But there are other kinds of behavior that are not very well described by just putting properties on things; you want to be actually procedural.
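A minimal sketch of what that rigid body schema looks like in practice, assuming a USD build recent enough to ship the UsdPhysics module; the values are illustrative, and binding the physics material to the collider is omitted for brevity.

```python
# Sketch: declaring physics behavior on USD prims with the open
# UsdPhysics schema (rigid body, collider, restitution). A physics
# engine such as PhysX interprets these declarations at runtime.
from pxr import Usd, UsdGeom, UsdPhysics, UsdShade

stage = Usd.Stage.CreateInMemory()
ball = UsdGeom.Sphere.Define(stage, "/World/Ball")

# Make the sphere a dynamic rigid body with collision geometry.
UsdPhysics.RigidBodyAPI.Apply(ball.GetPrim())
UsdPhysics.CollisionAPI.Apply(ball.GetPrim())

# A physics material carrying the coefficient of restitution (bounciness).
mat = UsdShade.Material.Define(stage, "/World/BouncyMaterial")
phys_mat = UsdPhysics.MaterialAPI.Apply(mat.GetPrim())
phys_mat.CreateRestitutionAttr().Set(0.8)  # rebounds with 80% of impact speed

print(stage.GetRootLayer().ExportToString())
```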
Michael Kass:
You want to say: okay, when you press this button, I want this or that thing to happen. And for that, we've developed a feature in Omniverse we call OmniGraph. OmniGraph allows you to connect computational nodes together, to wire them as a flow graph and create, essentially, discrete event simulations, so that you can attach some behavior to your 3D world. And then all of that is described in USD, so that you have a single place where you've got an open, standard description of what is there and how it behaves.
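OmniGraph itself runs inside Omniverse Kit, so as an illustration only, here is a toy flow graph in plain Python showing the idea of computational nodes wired together and triggered by an event. This is not the OmniGraph API, and the node names are made up.

```python
# Illustrative only: a toy flow graph showing the idea behind OmniGraph
# (computational nodes wired together, fired by events). NOT the
# OmniGraph API, which lives inside Omniverse Kit.

class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.outputs = name, fn, []

    def connect(self, downstream):
        self.outputs.append(downstream)

    def fire(self, value):
        # Evaluate this node, then push the result to downstream nodes.
        result = self.fn(value)
        for node in self.outputs:
            node.fire(result)

# Wire: button press -> open door -> play sound.
on_button = Node("OnButtonPressed", lambda v: "door_01")
open_door = Node("OpenDoor", lambda door: print(f"opening {door}") or door)
play_sound = Node("PlaySound", lambda door: print(f"creak near {door}"))

on_button.connect(open_door)
open_door.connect(play_sound)

on_button.fire(None)  # simulate the button-press event
```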
Marc Petit:
Fascinating. So that would help standardize the authoring and the playback. In one of the previous episodes we were making the case for the complementarity of USD and another format, glTF; which I think [inaudible 00:28:04] supported during our [inaudible 00:28:06], you remember, Patrick, where the idea was: USD is the PSD, and glTF is the JPEG. Or USD is the DOCX, and glTF is the PDF. How do we go about transporting those simulated worlds and those smarter objects? Do you think, as an industry, we can come to terms on rationalizing a transport mechanism, extending glTF? What is your take on that? Or is USD the solution to transport the content as well?
Michael Kass:
Yeah, we like USD as the overall transport. We think that there will be situations where you want to compress things to something smaller, and glTF is good at that. We're building a USD file format plugin so that you can reference glTF directly out of USD. But then, having done that, you can use the power of USD to do overrides on it. So you can publish a glTF asset, and maybe I want to change a material someplace; or maybe I want to take a whole collection of them and make them part of a class, and have one place where I change the material and it affects them all.
Michael Kass:
Or maybe I want to take all of that and instance it 47 times, but then customize one of those instances, et cetera. And then I want to make them all physically simulatable rigid bodies, and then I want to put that behavior on, and then maybe some other behavior on. So we think USD is the way to do that. And if the base objects come in as glTF, or if they're full USD, we're agnostic to that.
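A minimal sketch of that reference-plus-override pattern, assuming a glTF file format plugin is registered so USD can read the (hypothetical) chair.gltf directly:

```python
# Sketch: referencing a glTF asset from USD, then layering USD overrides
# on top of it. Requires a registered glTF file format plugin; the asset
# path is hypothetical.
from pxr import Sdf, Usd

stage = Usd.Stage.CreateInMemory()

# Bring the glTF asset in by reference; the file format plugin
# translates it into USD prims on the fly.
chair = stage.DefinePrim("/World/Chair")
chair.GetReferences().AddReference("chair.gltf")

# A USD override layered on top of the referenced asset: the glTF
# file itself is untouched; the stronger opinion lives in this layer.
color = chair.CreateAttribute("primvars:displayColor",
                              Sdf.ValueTypeNames.Color3fArray)
color.Set([(0.8, 0.1, 0.1)])
```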
Patrick Cozzi:
Michael, is this for the USD library? You'll be able to import glTF, and then it will go into, essentially, the runtime data model that you're manipulating through USD, is that right?
Michael Kass:
Yeah. I don't know if you guys have taken a look at the ways that USD can be extended; there are many, many of these, one of which is these file format plugins. The way these file format plugins work is that you point USD at some particular file type, and it says: okay, I'm going to ask this file format plugin to make sense of it. The file format plugin looks at the asset. And then, based on what it sees inside of the asset, it makes whatever calls it wants to the USD library to create the corresponding elements of the scene graph, in a way that USD fully understands.
Michael Kass:
And this is a very, very flexible method. For example, there's a Draco plugin that's part of the USD distribution now, and that plugin reads a standard Draco file with compressed meshes. It looks at that, decompresses it, and then creates the full USD representation. Nothing using that has to understand anything about the fact that it was compressed; it looks like pure native USD afterwards. And similarly for glTF: we're building a plugin that can read glTF and create all of the corresponding data structures inside of the USD scene graph. And then anybody using that can't tell the difference between it and an asset that was brought in as native USD.
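A short sketch of what that transparency looks like from the consumer side; model.gltf is hypothetical and requires the plugin to be registered:

```python
# Sketch: to a USD consumer, an asset pulled in through a file format
# plugin (Draco, glTF, ...) is indistinguishable from native USD.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("model.gltf")  # the plugin translates on load
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        points = UsdGeom.Mesh(prim).GetPointsAttr().Get()
        # Plain USD attributes: nothing downstream knows (or cares)
        # that the source was compressed or a foreign format.
        print(prim.GetPath(), len(points or []))
```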
Marc Petit:
It makes a lot of sense. One last question about Omniverse, which is a question that comes back often, maybe not your favorite question. What's the portability of Omniverse, and are there any plans to go beyond the NVIDIA hardware platforms?
Michael Kass:
Oh, no, I like this question. There's no problem with that at all. We really want Omniverse to be universal, and so we are connecting up Hydra renderers to Omniverse. We've reached out to all the major vendors of renderers. We're connecting up to HdStorm, the renderer that comes out of the box from Pixar as part of the USD distribution. And, yeah, we welcome people on all platforms to provide ways for us to connect through Hydra to whatever their favorite renderer is. We're also supporting a project to create a WebAssembly version of USD so that it can run inside a browser. And that WebAssembly build has been connected to Filament, so that it can render inside a browser, anywhere.
Michael Kass:
We want the data inside of Omniverse, all the data representations, to be open and public. If you bring your data into Omniverse, or into the things that connect to Omniverse, we have no interest in owning your data. We want it to move freely in and out. We believe firmly in the value that we've gotten out of web standards: that anybody can connect any website to any other website, with open data transports. So we're firmly committed to that. All the data representations will be open.
Marc Petit:
That's fantastic to hear. Thank you. What about OmniGraph? Is it conceived as an open source project, or is it something that NVIDIA wants to keep in the proprietary domain?
Michael Kass:
Yeah. We're devoted to open standards. For example, Hydra is an open standard, and there are a variety of different renderers that take that data and turn it into pixels in their favorite ways, on their favorite platforms. The representation of things in OmniGraph is USD, so there's nothing secret about it: you can author something in OmniGraph, and you can look at the USD that is created there. And we're not blocking anybody from taking that and interpreting it with different code.
Patrick Cozzi:
Yeah. Michael, I was hoping to switch gears a little bit, to go super geek and ask you questions close to home for me. As you may know, I taught GPU programming and architecture for many years at the University of Pennsylvania with our mutual friend Norm Badler. And as part of the interview for that role, I had to give a 90-minute lecture. I chose to give that lecture on everything I could possibly learn about the Z-buffer, which included some of your work on the hierarchical Z-buffer.
Patrick Cozzi:
And one thing I found fascinating was that when the Z-buffer algorithm was first thought of, in the 70s, I believe, it appeared as a footnote in a survey paper and was only later finally named in a thesis. It was kind of written off as brute force and ridiculously expensive. But then over the years the hardware changed, and new algorithms were discovered, such as yours; and now it's become very widely used. I was just curious if you could talk a bit about the graphics field, whether it's the Z-buffer or ray tracing, and how things go in and out of vogue based on technology shifts.
Michael Kass:
Right. I remember what you're talking about. I mean, I've spoken to people, like Ed (Catmull), who were there in the early days in Utah. And the frame buffer was a full rack; it cost, I think, over $100,000. And that was what you needed to even display something. So a Z-buffer on top of that was very, very costly. And I remember a day when I saw a chip with a Z-buffer in it, on a little add-on card, and it cost pennies. So it's really astounding. The origin of the hierarchical Z-buffer may be interesting to people in terms of scaling the metaverse. Ned Greene and I were trying to figure out how you could render ridiculously complex environments.
Michael Kass:
Environments that were possibly procedurally generated; and render them with absolute correctness, in time proportional to what you see, not proportional to the size of the entire universe that you've modeled. Our example was: suppose you have the Empire State Building, and you have around 100 floors. Each one has, I don't know, maybe hundreds of offices. Each one of those offices has a variety of things on the desk, including a bunch of paper clips, even. Now we wanted to look through a window into this thing, and if there was a paper clip there, you would see it correctly. So the total number of primitives, of triangles, for the model we were looking at was something insane; it was trillions, or something. And there's no way that you could possibly have processed each one, even looked at each one of those, in order to make a picture in a sensible amount of time at that period.
Michael Kass:
So our idea was this: we were going to take advantage of the fact that most things are occluded most of the time. We took the whole world, divided it into octree cubes, and started rendering front to back. The idea is that if you have an octree cube, you try to prove that the whole cube is invisible. If every pixel in every face of that octree cube is behind the closest thing in your Z-buffer, and those things are opaque, then essentially you've proven that the cube and everything inside of it is invisible, and you don't have to open up that cube. And that's what allowed us to make it possible to look at the Empire State Building from the outside and, through windows, see everything down to a paper clip, and solve that accurately in time proportional to what you can actually see, and not proportional to the whole Empire State Building.
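A runnable toy version of that idea, heavily simplified (orthographic view down the z axis, points as primitives, one level of hierarchy), just to show the conservative occlusion test and the front-to-back traversal:

```python
# Toy hierarchical Z-buffer: render cells front to back, and skip a
# whole cell (and everything inside it) once the Z-buffer proves that
# every pixel it covers is already occluded.
W, H = 8, 8
INF = float("inf")
zbuf = [[INF] * W for _ in range(H)]

class Cell:
    def __init__(self, rect, zmin, children=None, points=None):
        self.rect = rect            # (x0, y0, x1, y1) pixel bounds
        self.zmin = zmin            # nearest possible depth in the cell
        self.children = children or []
        self.points = points or []  # leaf primitives: (x, y, z)

def occluded(cell):
    x0, y0, x1, y1 = cell.rect
    # Provably invisible if every covered pixel already holds something
    # at least as close as the cell's nearest possible depth.
    return all(zbuf[y][x] <= cell.zmin
               for y in range(y0, y1) for x in range(x0, x1))

def render(cell):
    if occluded(cell):
        return  # cull the cube without opening it up
    for (x, y, z) in cell.points:
        if z < zbuf[y][x]:
            zbuf[y][x] = z
    # Front to back: nearer children first, so later ones get culled.
    for child in sorted(cell.children, key=lambda c: c.zmin):
        render(child)

near = Cell((0, 0, 8, 8), zmin=1.0,
            points=[(x, y, 1.0) for y in range(8) for x in range(8)])
far = Cell((0, 0, 8, 8), zmin=5.0,
           points=[(x, y, 5.0) for y in range(8) for x in range(8)])
render(Cell((0, 0, 8, 8), zmin=1.0, children=[near, far]))
# 'far' is skipped entirely, without touching any of its primitives.
```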
Michael Kass:
Today, with ray tracing, we have an even better solution. I didn't think 15, 20 years ago that we'd be talking about the kinds of ray tracing speeds that we have today. And not only that, but it's the AI denoisers that are really making it possible to take advantage of that ray tracing; because even as ridiculously fast as our ray tracing hardware is, without those denoisers you would still need way too many rays to make it practical.
Michael Kass:
But the degree to which they've improved the denoising situation is really astounding. Even with one ray per pixel, you can get some remarkably good quality images. So now, the main renderer for Omniverse is the RTX renderer, led by [inaudible 00:40:30]; and his team has put together something with performance that is so good, we don't even have a need to rasterize the primary rays. And by not rasterizing the primary rays, the rendering performance doesn't depend on the amount of geometry that you have loaded. With a good acceleration structure, the ray intersections are proportional to the log of the size of whatever it is that you have loaded.
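A back-of-the-envelope illustration of why that logarithm barely matters:

```python
# With a good acceleration structure, one ray intersection costs
# O(log N) traversal steps, so a million-fold increase in geometry
# barely changes the per-ray work.
import math

for n in [10**6, 10**9, 10**12]:
    print(f"{n:>16,} primitives -> ~{math.log2(n):5.1f} traversal steps")
# 1,000,000 primitives         -> ~ 19.9
# 1,000,000,000 primitives     -> ~ 29.9
# 1,000,000,000,000 primitives -> ~ 39.9
```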
Michael Kass:
And to first order, the log of the size of the universe is a constant, so that basically doesn't matter. As a result, you can put huge amounts of geometry into this renderer and it doesn't really slow down very much. So we have been working, as you know, with BMW on their factory-of-the-future digital twin. And they have been really surprised at how much complexity you can load into this renderer without making it choke. So we're putting more and more detail into the digital twin of these factories, and it's really changing the way they're doing business.
Marc Petit:
And it seems [inaudible 00:41:57] to scalability across multiple GPUs, right?
Michael Kass:
Yes. So the traditional game engines are mostly concerned with making sure that they work everywhere. And as a GPU manufacturer, we're of course more interested in scaling up. If it's worth it to you to spend what it takes to get an even higher quality image faster, we want to solve that problem for you. And the RTX renderer has very good scaling. So we think this is a game changer in the future because you can get NVIDIA GPUs from cloud service providers at less than a dollar an hour. So if you want a hundred of them for an hour, that's not a ridiculous amount of money to be spending for many kinds of purposes. And we think that's a very attractive option and that's what we're hearing from our customers.
Patrick Cozzi:
So, Michael, one last question from me. You've made contributions across so many different areas and found success in things from computer vision to computer graphics, both real time and in the movies, plus all your work with dancing and juggling and ice skating. I have to believe that you have a formula for success, and I'd love to ask, for me personally, but also for anyone who might want to break into the field of the metaverse: what advice do you have?
Michael Kass:
I find a lot of people have difficulties when they start to get outside their comfort zone. And kids are used to being bad at things. Whatever they try at first, they don't do it very well. But adults have a tendency to fall into their comfort zone. And then when they try something new, if they don't succeed immediately, they think, oh, that means I'm stupid or I don't have aptitude for this thing or whatever it is.
Michael Kass:
And that's the most common barrier I see to people achieving what they could otherwise do. So don't be hard on yourself. Just expect to be really bad at the new thing when you start it, and don't worry about that. People spend so much of their intellectual and emotional energy worrying about whether they'll ever succeed, whether they're being ridiculous, and what people think about them. If you just avoid that, pay attention to what you want to learn, and clear your mind of those other thoughts, it's remarkable how much more progress you can make. So just follow your interest, do it, don't worry, and you'll make a lot more progress than you ever thought you could.
Patrick Cozzi:
That's sound advice.
Marc Petit:
Thanks for that. So, our last two questions, which are routine for our podcast. First, we'd like to make sure we hear from you about the topics that you think we should have covered, or should cover in the future on this podcast. What is top of mind for you as we think about the metaverse?
Michael Kass:
Oh, I think we covered most of those topics. For me, I think it's really important for it to take off that it's about open standards like we've been talking about. This is way too big an idea for any one company to try to own and it's way too big, there are too many moving pieces. So I'm simultaneously kind of impressed at how much progress we've made so far and humbled by how much remains to get to the vision that we have. And the only way we can get to where we want to get to is by harnessing the efforts of so many different people and organizations and putting them all together.
Michael Kass:
And so I think the way those pieces come together is really the key, the key to all this. A couple of years ago, I described what we were doing in Omniverse to a VP of research at a major tech company in the Valley. And he said, "Well, that just sounds like plumbing." And in a sense, he was right. So much of it is plumbing, but the plumbing is critical. I mean, plumbing is what makes civilization possible. You put the right plumbing there, and now you've got the foundation for something pretty awesome. That's not what the press usually gets excited about; but it's the more mundane, underlying things that actually enable the progress, enable the interconnectivity, and will get us to where we want to go.
Marc Petit:
I agree. And it's relevant for our audience of technologists trying to lay this foundation. So, in the spirit of the importance of open platforms, is there anybody, any person or organization, that you'd want to give a shout-out to? Whether somebody from the past, or somebody that you think could play a big role?
Michael Kass:
Well, I just want to, for sure, thank Pixar for their decision to open source USD, and for their efforts in supporting the community and the whole ecosystem. We have benefited enormously from it. And we would never have been able to make the amount of progress that we have with Omniverse without standing on that foundation.
Marc Petit:
Well, we know [inaudible 00:48:10] is going to be with us in a few weeks, so we'll pass the message on to him, because I think he played a big role at Pixar.
Michael Kass:
Absolutely.
Marc Petit:
Well, Michael, it was fantastic. Thank you so much for your insight as the principal architect of Omniverse. It's great to hear you talk about it; and that focus on open standards is something that we really appreciate, and we commend you and NVIDIA for supporting it.
Michael Kass:
My pleasure. Thank you so much.
Marc Petit:
So Patrick, it's time to thank our audience. It looks like more and more people are enjoying us geeking out, so this is great. Thank you, everybody, for listening to us. Remember what you have to do: subscribe, rate, review, share. We are in that business now of asking for that social component. I really enjoyed being with you today, Michael. Thank you, Patrick.
Patrick Cozzi:
Yeah. Thank you, Michael, for joining us.
Michael Kass:
My pleasure.
Marc Petit:
Yeah, and we'll be back in a couple of weeks. Thank you, everybody.