Rendering Revolutions: Chaos founder Vlado Koylazov's Journey from V-Ray to Virtual Production
Vlado Koylazov, co-founder of Chaos and creator of V-Ray, discusses his journey in computer graphics, the development of V-Ray and Chaos Vantage, challenges in real-time rendering, and the impact of AI on the industry. He shares insights on virtual production, USD adoption, and the future of rendering technology.
Announcer:
Today on Building the Open Metaverse
Vlado Koylazov:
My goal was to just make the users’ lives easier. We realized that all users just want their image out. They don't necessarily have the time to go deep into rendering.
Announcer:
Welcome to Building the Open Metaverse where technology experts discuss how the community is building the open metaverse together, hosted by Patrick Cozzi and Marc Petit.
Marc Petit:
Hello, everybody, and welcome back to Building the Open Metaverse, season six, the podcast that showcases the community of artists, developers, researchers, executives, and entrepreneurs who are building the internet of tomorrow.
My name is Marc Petit, and my co-host is Patrick Cozzi. Patrick, how are you?
Patrick Cozzi:
Hey, Marc. Doing great.
As you know, I’m a bit of a graphics geek. Within graphics, my favorite subfield is rendering, which is why I'm so excited that joining us today is a pioneering figure in the computer graphics industry, the co-founder and head of innovation at Chaos.
He's the mastermind behind V-Ray and Chaos Vantage. Vlado Koylazov, welcome.
Vlado Koylazov:
Thank you. Thank you, Patrick. Thank you, Marc. Thanks for inviting me. It's an honor to be here. I've known Marc for a long time, so it's a good opportunity to catch up.
Marc Petit:
Today, we'll be discussing your incredible journey, the innovation you've brought to the industry, and what's next for Chaos. As you know, as is customary on the podcast, we would like to start by discussing your journey to the metaverse.
Vlado Koylazov:
My very first impressions of computer graphics were when I was a kid, I don't know, maybe fourth or fifth grade. I had this book about computer graphics. It had explanations of what ray tracing is. It had some images of glass spheres and all kinds of things that look really, really interesting to me, and it discussed the original Tron movie and the effects that they did for that film. And I thought that was fascinating, and I thought that that's something that I want to do at some point, maybe.
Later on, things like 3D games started happening. There was Doom, and there was Quake, and I realized that I was not really that interested in the game itself, but rather in the technology behind it, how it was done, and this idea of a virtual world where you can go and visit with other people at the same time.
It was fascinating, so I wanted to figure out how this is done and how it works. I guess that's how it started.
Marc Petit:
So, you started as a production company, if I remember well.
What then got you to develop V-Ray back at the time?
Vlado Koylazov:
Originally, before I joined, the company was doing production. I think one of the other co-founders at that time, Peter Mitev, wanted to go into software development because he was a developer. He asked me if I wanted to work on some projects. After some back and forth, I decided, "Okay, let's see what we can do."
The first thing we did was this little plugin for 3ds Max called Phoenix. It's not the Phoenix that we have today. It was a different plugin. It was still designed to create fire and smoke effects, but it was really, really simple. It was a moderate success, I guess, for us back then. Even just seeing something that you wrote being sold and used by other people was super exciting for us.
After that, we started working on V-Ray, around the time that other renderers started appearing. There was Brazil, there was finalRender; there was Arnold, that very first version of Arnold.
I thought this was really, really interesting, and I wanted to see if that's something that we can also work on. V-Ray started, and eventually, it overshadowed the rest of the business.
Patrick Cozzi:
I believe V-Ray 1.0 came out about 2002, after five years in development. Could you share some of the key challenges during that period?
Vlado Koylazov:
I showed it to some people at SIGGRAPH 2001.
Our first reseller was a company called Digimation. There was this guy there, Paul Preschel; some of your listeners might remember him. He later started TurboSquid, but we showed V-Ray to them. He said, "This looks really cool. Maybe you should turn this into a product." So that's what we did. We worked on it a little bit more, and then we released it as a product.
Patrick Cozzi:
V-Ray 1.0, I mean, it had ray tracing, it had global illumination, it had shaders. In hindsight, of those features, what do you think made the biggest difference?
Vlado Koylazov:
It was a pure ray tracer. We decided early on that we didn't really want to do any hybrid rasterization/ray-tracing thing. It was going to be ray tracing all the way. And I think that proved crucial because it allowed us to render a lot more complicated scenes than we would otherwise have been able to.
The other thing that turned out to be really important at that time was that we had actual 3D motion blur right from the start, which was quite difficult—and it still is. It's not the easiest problem to solve, and we spent quite some time trying to figure it out. That was also crucial for V-Ray.
We decided early on that we would have physical lights and physical materials, partly because this made it possible to make optimizations that otherwise would not have been possible. V-Ray's biggest advantage was that it had these different global illumination engines that it could combine in different ways. So you could use brute force or a combination of brute force and something else, like a photon map or a light cache.
This combination of techniques allowed V-Ray to be pretty flexible. It turned out that our customers actually liked that. They like to make different trade-offs between performance and quality. Later, I realized that this decision to give them the choice was actually pretty important.
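As a purely hypothetical sketch (these are not actual V-Ray class or parameter names), the quality-versus-speed choice Vlado describes amounts to letting users pair different engines for primary and secondary GI bounces:

```python
from dataclasses import dataclass
from enum import Enum

class GIEngine(Enum):
    BRUTE_FORCE = "brute_force"        # unbiased, slow, no caching
    PHOTON_MAP = "photon_map"          # cached photon tracing, approximate
    LIGHT_CACHE = "light_cache"        # approximate many-bounce cache, very fast

@dataclass
class GISettings:
    primary: GIEngine      # bounces seen directly by the camera
    secondary: GIEngine    # all further bounces

# Quality-first: brute force everywhere.
quality = GISettings(primary=GIEngine.BRUTE_FORCE, secondary=GIEngine.BRUTE_FORCE)

# A typical interior trade-off: accurate primary bounces, cached secondaries.
fast_interior = GISettings(primary=GIEngine.BRUTE_FORCE, secondary=GIEngine.LIGHT_CACHE)
```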
Marc Petit:
I took over the 3ds Max team in 2002, and we had this partnership with Mental Images and Mental Ray inside the package. So I was like, "Okay, fine." I knew Mental Ray because we had it in Softimage as well, so I started to work with that. Then you guys showed up with V-Ray, and the V-Ray shader seemed to be physically correct and energy-conserving; it was way ahead of its time.
The shock came in 2005 with V-Ray 1.5, when you guys started to introduce all of those new features, like light portals. Remember that? Then, the sun/sky system. You made architectural visualization absolutely easy. For me, I had to fight. I'm not going to name names, but it was incredible how that approach was completely new and almost revolutionary at the time, because it was really about building the trade-offs inside the renderer so that it becomes very easy for the users to do stuff.
We were desperate, and we were trying to discuss with our friends at Mental Images, "Can we do that in Mental?" And then, for some reason, there was a mental block. "No, no, it's the shaders; the user has to take care of that." So I think it was, for me, at that moment, we probably realized that V-Ray would win the world of architectural visualization forever because of that ease of use that you brought.
That innovation was absolutely fantastic and way ahead of its time. Do you recall the story this way?
Vlado Koylazov:
I was worried a little bit at some point, but at the same time, competition is good. It always keeps you on your toes, so you always try to make things better. Yeah, from that point of view, it was probably for the best. But, to your point, I think very early on, my goal was to make users' lives easier.
We noticed that they spend a lot of time not just on the rendering process itself but also on building their scenes. Anything that actually helps them get to an image faster is valuable.
This didn't necessarily have to be strictly rendering technology; like you mentioned, the sun and sky system is really just a procedural direct light and a procedural environment texture. But users really need these tools to build their images faster. The same is true for the V-Ray material.
Yes, you could probably just give the users the components to build a new material. You could give them separate diffuse, reflection, and refraction components and let them create their own shader, but we realized that all users just want their image out. They don't necessarily have the time to go deep into rendering.
Marc Petit:
I want to underline one thing you said. It was incredible that you got the whole industry to pay for V-Ray while there was a free rendering and ray tracing alternative shipping with 3ds Max. I think it was quite a feat to achieve that. It's a testament to how important it is to be user-focused, bring people what they want, and make things simple.
I think, to be fair, Mental Ray wanted to optimize for flexibility and let people do whatever they wanted. And I think, in hindsight, it was probably not the best choice.
Vlado Koylazov:
For some users, flexibility is very, very important, and I recognize that. We do have some of these users as well, but honestly, most of the time, people just want a nice image output.
Marc Petit:
Were you the one behind those decisions about making the V-Ray material the way it was? In those early days, I mean, energy conservation was really, really new and really innovative, and you had a lot of trade-offs to make to get that shader to work for almost everything.
Was this you, or how did they come to pass?
Vlado Koylazov:
I actually got the idea for the V-Ray material by looking at the existing ray trace material in 3ds Max, which had some level of energy preservation in it—not completely, but it was a good starting point. Also, I realized that the math just doesn't work if a shader is not physically accurate. I wanted to implement a technique called multiple importance sampling in V-Ray. V-Ray was probably one of the first commercial renderers to use this technology.
But for this to work, the materials and the lights need to be physically accurate and preserve energy. I was forced to do it this way. It wasn't so much a conscious choice as just a necessity; it took a while to figure out. There were some different approaches that I could take, like how the different components interact with each other: diffuse, reflection, refraction…
If you turn everything up, if you make all the components full strength, what happens? These were probably the choices that I had to make. Ultimately, they proved to be more or less correct. Not everything; we occasionally still have to update something or another, but overall, it seemed to be a robust way of creating materials.
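For readers unfamiliar with the multiple importance sampling Vlado refers to, the standard balance heuristic (from Veach and Guibas) combines samples from several strategies, typically BRDF sampling and light sampling, by weighting each sample with the relative magnitude of its sampling density. It only behaves well when materials and lights report physically meaningful densities, which is exactly the constraint described above:

```latex
% Multiple importance sampling with the balance heuristic:
% n_i samples are drawn from each strategy i with density p_i.
F = \sum_{i} \frac{1}{n_i} \sum_{j=1}^{n_i} w_i(X_{i,j}) \, \frac{f(X_{i,j})}{p_i(X_{i,j})},
\qquad
w_i(x) = \frac{n_i \, p_i(x)}{\sum_{k} n_k \, p_k(x)}
```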
Marc Petit:
One of the recent news stories from Chaos was the support of Blender, which is a free package with very strong rendering of its own.
Why did you choose to support Blender? I think it's fantastic news for the Blender community.
Vlado Koylazov:
There was this developer, Andre, who had actually implemented the integration of V-Ray and Blender all by himself, using the public V-Ray SDK, and who now works for us. We thought that this was an interesting idea, but the integration was maybe not ideal. Because Blender is open source and V-Ray is not, this creates a bit of a challenge for us.
I would love to go open source as well, but the fact is that we can't afford to do it right now. We had to figure out some intermediate layer where the open source part would talk to the closed source part, and this is not very, very easy to do in a good way. We had this for a while, but then Blender changed radically. The SDK was completely different. What we had as code didn't work anymore.
For some time, we debated whether to rewrite everything. Eventually, we saw that Blender was really on the right path, and more and more people were starting to use it for all kinds of interesting purposes.
About two years ago, we decided that it seemed like the right time to revisit that project. So we took the source code, updated it for the latest releases of Blender, fixed all the issues, and, yep, hopefully, later this year, we'll be ready to give users something to play with.
Marc Petit:
A few years back, you announced Project Lavina, which, I think, had the goal of bringing V-Ray into the real-time domain. And Project Lavina became Chaos Vantage. It seems like it's gaining significant traction in the industry.
I'm very familiar with Unreal Engine here, and we know that game engines like Unreal achieve their speeds by implementing systematic trade-offs to favor speed over precision.
What kind of trade-off are you making to get V-Ray to run in real-time, and where do you think Vantage is in terms of quality compared to game engines?
Vlado Koylazov:
When we started working on Vantage, NVIDIA announced that they would be doing hardware ray tracing, which would be available through the DirectX and OptiX APIs.
As I said, my very early experiences with computer graphics were games, so I always wanted to go back to real-time ray tracing, maybe at some point. And I thought that was a good time to see what this new technology could do and whether it could bring us closer to real-time ray tracing.
We did this experiment in 2018, trying to ray trace a large scene with lots of trees, lots of instances, and global illumination and reflections. It actually worked really well. It was clear that we would need, maybe, things like denoising or some other optimizations just to get to a real-time frame rate. Because, even today, we can only do a few rays per pixel; it's not like a production renderer, where you may need hundreds or thousands of rays per pixel.
So, hardware is still not that fast, but there are now a lot more tricks that we can do. I think the biggest advancement there was the denoising techniques that came out around that time. They were crucial for making real-time ray tracing possible, along with various sampling improvements. If you have only a few rays to trace, you have to make sure that they are the right rays, the ones that contribute the most to the image.
There was a lot of research going into how to sample scenes with many light sources and how to sample GI more effectively. All these advancements made real-time ray tracing possible. But the trade-offs that we had to make initially were in terms of features. Obviously, we knew that we could not do everything that V-Ray could do in real-time, so we had to be very careful with the feature set that we supported.
As hardware got faster, we added the missing features bit by bit, always keeping in mind that it had to run in real-time. Other than that, there aren't really too many restrictions. In terms of actual lighting calculations, formulas, and mathematics, Vantage is really close to what V-Ray does. The quality of the lighting is the same; it's just various things that might work in V-Ray but not in Vantage.
For example, procedural textures: with V-Ray, people can build complicated shading networks that compute a lot of stuff on the fly. With Vantage, this was not possible, so we had to be really selective about the shading nodes that we supported.
Also, regarding environment effects, like volume lighting, we decided early on that we would look at them sometime later in the future.
Marc Petit:
I want to come back to the DLSS, deep learning super sampling, because I think that was a revolutionary moment. I saw you were the first non-gaming product to incorporate DLSS 3.5.
Can you tell us what speed-ups you get with that? Maybe explain to our audience what it does, because I think it's really the magic of AI at work here.
Vlado Koylazov:
DLSS is actually an awesome development because it combines several things in one. Denoising is just one thing that it does, but it also does upscaling and anti-aliasing, which are both crucial for the performance. With other denoisers that we had up to that point, you still had to render the same resolution image, and the denoiser would just remove the noise but not solve the anti-aliasing problem.
If you can only trace one path per pixel, you can't do much anti-aliasing.
And while you can do stuff with temporal accumulation and things like that, it's just complicated to get it right. The DLSS denoiser just combines all these three features: denoising, upscaling, and anti-aliasing in a very powerful way.
Specifically, the upscaling is important for real-time because it allows us to render at half the resolution. We basically have a quarter of the pixels, so the speed goes close to four times the frames per second. So that was crucial.
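To make the arithmetic concrete: rendering at half the resolution in each dimension means tracing roughly a quarter of the primary rays, which is where the close-to-four-times frame rate comes from. A trivial sketch (the resolutions are just an example):

```python
# Back-of-the-envelope illustration of the upscaling speed-up described above.
target_w, target_h = 3840, 2160                     # display resolution
render_w, render_h = target_w // 2, target_h // 2   # internal (pre-upscale) resolution

target_pixels = target_w * target_h                 # 8,294,400
render_pixels = render_w * render_h                 # 2,073,600

print(render_pixels / target_pixels)                # 0.25 -> a quarter of the primary rays,
                                                    # hence close to 4x the frame rate before
                                                    # denoising/upscaling overhead
```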
Marc Petit:
Yeah, you basically render one pixel and guess 10 or 12 pixels around it. I mean, this was absolutely fabulous.
Vlado Koylazov:
We probably could not have done it by ourselves because it requires a lot of training data, then computational resources to build a model, and, of course, a lot of know-how.
It's very nice of NVIDIA to do this research and make it available.
Marc Petit:
Kudos to NVIDIA for giving us that technology.
Patrick Cozzi:
What about VR? The hardware capabilities tend to be quite different between desktop and VR; does Vantage support VR?
Vlado Koylazov:
We are working on VR right now. We have internal prototypes that work quite well by this point, so hopefully, this will be in an official release at some point soon. And yes, you're right, the requirements are different, and the frame rate is very important because otherwise, there's real discomfort. I myself really don't feel very comfortable wearing VR headsets, so for me, that particular problem is very important to solve correctly.
We had to do a few tricks. The upscaling denoiser helped a little bit, but there are also things like reprojection, where you can generate intermediate frames so you don't have to actually render every single frame that goes to the VR headset. It can interpolate between actually rendered frames. That's very important.
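As a rough illustration of the reprojection idea, and not Vantage's actual implementation, a previously rendered frame can be warped toward a new head pose using its depth buffer and the two camera matrices; disocclusions are simply left unfilled here:

```python
import numpy as np

def reproject(prev_color, prev_depth, prev_view_proj, new_view_proj):
    """Warp a rendered frame to a new camera pose using its depth buffer.

    prev_color: (H, W, 3) image rendered for the previous pose
    prev_depth: (H, W) depth stored as NDC z for the previous pose
    prev_view_proj / new_view_proj: 4x4 view-projection matrices
    Holes and disocclusions are left black; a real system would fill them.
    """
    h, w = prev_depth.shape
    out = np.zeros_like(prev_color)
    inv_prev = np.linalg.inv(prev_view_proj)
    ys, xs = np.mgrid[0:h, 0:w]

    # Pixel -> NDC for the previous frame.
    ndc = np.stack([2.0 * (xs + 0.5) / w - 1.0,
                    1.0 - 2.0 * (ys + 0.5) / h,
                    prev_depth,
                    np.ones_like(prev_depth)], axis=-1)

    # NDC -> world, then world -> clip space of the new pose.
    world = ndc @ inv_prev.T
    world /= world[..., 3:4]
    new_clip = world @ new_view_proj.T
    new_ndc = new_clip[..., :3] / new_clip[..., 3:4]

    # NDC -> pixel coordinates in the new frame (nearest-neighbour scatter).
    px = ((new_ndc[..., 0] + 1.0) * 0.5 * w).astype(int)
    py = ((1.0 - new_ndc[..., 1]) * 0.5 * h).astype(int)
    valid = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    out[py[valid], px[valid]] = prev_color[ys[valid], xs[valid]]
    return out
```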
Marc Petit:
Let's talk about another form of immersion. You recently launched Project Arena. There is a very interesting video of you and your friend, Mihail, the CTO from Chaos, about Project Arena, which is your solution to bring real-time rendering to virtual production on LED stages. So, I encourage everybody to go look at that video. I think it's time well spent and you actually do talk about some of the challenges that you had to overcome.
Can you give us an overview of the problems you wanted to solve to get Project Arena going?
Vlado Koylazov:
We were always, as a company, interested in virtual production, even before we had the real-time ray-tracing engine. We tried to do experiments in 2015 with Kevin Margo and his CONSTRUCT project. He, at that time, was working at Blur, where they were doing a lot of game cinematics and motion capture. But the thing with motion capture is that, while you are doing it, you don't have any idea what the virtual world looks like and where the actors are in it.
Basically, you have to imagine what the whole scene looks like and then hope that you are doing the right thing. Kevin's idea was to find a way to visualize the motion capture data in real-time: both the environments and the virtual characters, driven by the actual actors in the real world, and to be able to see all of that in real-time, along with the camera.
We tried to do that with the technology available at that time, GPU ray tracing, and it sort of worked, but it wasn't really practical at that time. The hardware wasn't fast enough, and we didn't have the software advancements that we have today, like denoising, so we left the project at that point.
At the beginning of last year, 2023, we already had a real-time path tracer, so we thought about what other applications we could have for it, and virtual production came to mind.
We wondered what it would be like if we went back to that project and tried to do it now. But this time, we decided to focus on LED walls, which was something that was relatively new. It wasn't really a thing back in 2015. Virtual production on LED walls seemed to be gaining traction, and it was a really good use case for real-time technology.
At the same time, we had conversations with customers who said it was taking a lot of time for them to convert the environments they had built in their usual DCC tools, such as Max, Maya, or whatever, and prepare them for a real-time experience on LED walls.
We realized that we already have all these integrations into different products, that we can export those environments very well into V-Ray scene files, and that Vantage can read those scenes. The idea was that we would try to take that existing technology and use Vantage to drive an LED wall, with proper camera tracking and everything. That's how Project Arena started.
Marc Petit:
Virtual production right now is the hallmark of Unreal Engine. There are probably many, many stages running Unreal in virtual production.
You mentioned the workflow advantage: the data probably flows more easily into Vantage with much less conversion. Beyond that, how do you plan to take on the competition there, bring new capabilities, and stay ahead in the real-time domain compared to those game engines?
Vlado Koylazov:
First of all, just to mention that Unreal is great. It can do so many things. There is a lot of technology that's been built into Unreal over the years. We don't think that we can replace Unreal for all use cases, but we do believe that there are certain use cases where our solution has an advantage.
I think those cases are exactly where teams and people want to use virtual production but don't have the know-how, time, or knowledge to convert their project to a real-time situation.
This is where we believe Project Arena can actually help. Going from a DCC to an LED wall just happens in a matter of minutes, with practically no conversion, and you get results that are very, very close to what you originally built. This is where the advantage of Project Arena actually is. We think that this workflow can actually open virtual production up to people who didn't have the ability to use it before that.
Marc Petit:
Shout-out to Chris Nichols, who was running your lab in Los Angeles. He has been a virtual production advocate and fan for many years. He's also a fellow podcaster. He has been doing podcasts since the beginning of podcasts, I think.
He is one of the OGs of podcasting for computer graphics. So, shout out to Chris.
Vlado Koylazov:
Getting close to 500 episodes.
Marc Petit:
We're babies, Patrick. Complete babies right here.
Vlado Koylazov:
He has been pushing for virtual production this whole time. Over the years, he's been asking us to do something about virtual production. I was always like, yeah, it's complicated, and then it's a lot of work and whatever. But then, like I said, the stars aligned at the beginning of last year, so I thought, "Well, let's try this again."
Marc Petit:
No, look, the workflow makes a lot of sense when everything is on the stage and you're turning around new scenes quickly.
I wish you luck with Project Arena. It complements many of the things you can do with Unreal.
Patrick Cozzi:
We wanted to talk a bit about your role. I mean, as the head of innovation at Chaos, it sounds like the ultimate dream role. Could you tell us a bit about your responsibilities and areas of focus?
Vlado Koylazov:
At some point, V-Ray just became way too big for one person to manage. That's when we started building different teams that focused on the different V-Ray integrations. I didn't really have to work that much on V-Ray for 3ds Max or V-Ray for Maya and so on; I could just focus on the actual rendering algorithms. That's why we built an R&D team that was just looking at rendering technology.
Originally, this was mostly about V-Ray, but obviously, then we had Corona and now Enscape, so the team's role grew beyond just developing new technology for V-Ray. With everything that's going on in the area of artificial intelligence, we decided to build a dedicated machine learning team.
Last year, we tried to find and hire some very smart people who had been working on artificial intelligence for a while. They help us bring our products to the next level.
My job right now is overseeing all of that new development, not just new rendering technologies and optimizations and new products but how we can use artificial intelligence.
Marc Petit:
Outside of AI, you mentioned that you're still working on rendering and sampling. Is there a next big thing we can expect in rendering?
I know the Weta guys have been working on diffraction rendering for some time. What's your vision there? Where do you think the rendering field is headed?
Vlado Koylazov:
Almost everything you can think of has already been explored in quite a lot of detail. The things that are not covered yet are the really, really difficult cases for which we have no good solutions. I mentioned caustics, which is still a hard problem to solve. We need more realistic materials, and obviously, we can keep optimizing things as much as we want.
I guess, in terms of rendering, there isn't really a next big thing; there are just a lot of small refinements on existing techniques. For me, the biggest advancements will come from workflow improvements and just being able to build your scene faster.
There are any number of techniques now, any number of render engines that can turn that scene into a nice image.
Patrick Cozzi:
You mentioned a bit about scaling your engineering team. I was curious: how close are the V-Ray and Enscape teams working together?
Do you think, over time, the Chaos products might converge into one rendering engine?
Vlado Koylazov:
As you can imagine, we have quite a few renderers now. Every now and again, we go back to that question. Ultimately, at least at this point in time, with the technology, both hardware and software, that we have today, I don't think there's one render engine that can do everything and do it well. The analogy I like to make is with different car models; even one car manufacturer has these different car models.
Yes, any car can take you from place A to place B, and they all kind of work in a similar way, but there's a reason why there are different models. There are more powerful cars, bigger cars, but also lightweight ones; electric versus diesel, and so on. In the same way, there are reasons why different render engines are needed. And at least, for the time being, I think it's just not possible to have one solution to rule them all.
That's the dream, obviously, but it's just not practical at this point in time.
The clearest example is between maybe Enscape and V-Ray. Enscape is a very lightweight renderer, very fast, for a very specific purpose, and it works really well. But you don't really want to turn Enscape into a VFX renderer because it will not work.
As you start adding features, the renderer gets more complicated, slower, and more difficult, which will just kill the original purpose of the renderer. So that's why I think the best way to go right now is to have solutions that work for specific use cases.
Marc Petit:
Let's switch to AI. What do you see as the big impact that AI can have on this field?
You mentioned scene generation and automation. Can you elaborate on that?
Vlado Koylazov:
There are several use cases for AI. The most obvious one where it can help is image enhancement. At this point, that is pretty much an accepted fact, and we see customers using AI to augment their final renders every day.
One good example is CG people. Those have been traditionally really difficult to do just because it takes a lot of effort to do a realistic human in 3D and to render it out.
Traditionally, customers would often just Photoshop photos of real people on top of their renders. With AI, this actually can be done pretty well. You just need to have a decent, or a semi-decent, 3D model of a person in your image. Then, with some AI manipulation, it just becomes this very realistic person with a proper skin tone, hair, and everything.
This is happening every day, and it's an obvious application of AI.
It's not only people; you can do the same with vegetation and other parts of the image. This is something we are working on: how we can bring this type of image enhancement to our products. There are a number of AI tools that can do this as well. There is Magnific AI, there's Craiyon AI, and we see people use them more and more.
Beyond image manipulation, however, I still think it is valuable to have your scene as an actual 3D scene. If you have an image you like but then want to turn it into an animation or look at it from a different camera angle, you need the 3D information. And with AI specifically, directability is a problem: if you just want to move things around a little bit in your image, like moving an object to a different position, that's not very easy to do with AI these days.
It's either you like what you get, or you just type in another prompt until you get something that is usable. From this point of view, 3D content is actually very easy to manipulate in some sense. And unfortunately, it's not an easy problem to apply AI to.
Right now, the machine learning methods that we have at our disposal need a lot of data to work, which is why you need billions of images and text to train something like Stable Diffusion.
Marc Petit:
You mentioned stylization and material creation using AI?
Vlado Koylazov:
Materials are definitely an interesting topic, especially some types of materials, like different types of wood or different types of stone. Having a large library of materials helps, but you can't really have an infinite library. Being able to generate materials based on some user description is a very valuable use case, and there are different ways to approach it. One way would be just based on a text prompt: you could generate a set of textures that the user can use.
I think that's one of the differences compared to image generation: you have to generate several textures. There has to be a diffuse map, a roughness map, a normal map or a bump map. It's an interesting problem to figure out how to generate all these textures so that they also make sense together. And it doesn't have to be just from a text description.
For example, if you have a photograph and like a material in it, you can click on the image and say, "I want this material," and the software will recreate it for you. That's an interesting way to do things and something that we want to explore. So far, we've done some promising research, but we'll see where it goes.
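To make concrete what "several textures that make sense together" means, a physically based material is typically a bundle of co-registered maps. The sketch below is purely illustrative (the names are hypothetical, not a Chaos API), showing why a generator has to produce the maps jointly rather than as independent images:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GeneratedMaterial:
    """A minimal PBR texture set; all maps must share resolution and UV layout."""
    diffuse: np.ndarray    # (H, W, 3) base colour / albedo
    roughness: np.ndarray  # (H, W)    microfacet roughness in [0, 1]
    normal: np.ndarray     # (H, W, 3) tangent-space normals encoded in [0, 1]

    def validate(self):
        h, w = self.roughness.shape
        assert self.diffuse.shape[:2] == (h, w)
        assert self.normal.shape[:2] == (h, w)
        assert 0.0 <= self.roughness.min() and self.roughness.max() <= 1.0

# A text- or image-driven generator would have to produce all three maps jointly,
# e.g. generate_material("weathered oak planks") -> GeneratedMaterial(...),
# rather than three separate images that do not line up with one another.
```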
Patrick Cozzi:
We wanted to talk a bit about USD and MaterialX.
Marc and I are involved in the Metaverse Standards Forum, and we're paying a lot of attention to open standards and Universal Scene Description. USD has been gaining a lot of traction for sharing and describing 3D scenes. I was curious about your perspective on its adoption.
We noticed that Chaos joined AOUSD. Could you talk about your motivation there?
Vlado Koylazov:
USD has been interesting for us from the very beginning, when Pixar open-sourced it, because it seemed to address the need for a common, standard graphics format for scene exchange. That need has existed for a long time, right? It's not something new. It's been around for years and years, and there have been various attempts to solve it with various file formats.
I guess the most recent one before USD was Alembic, which tried to somewhat solve this problem.
The need was there. The only problem was that all the solutions had various drawbacks. With Alembic specifically, the drawback was that, first, the use case was very specific, and second, the standard wasn't strict enough. Different applications ended up writing things into the Alembic file in different places, with different names. Even though, in theory, you could exchange data between applications, in practice, it was not really that simple.
USD promised to solve all these problems, and standards are really only as good as the number of companies that support them. The stars happened to align around USD; many people realized that this is maybe something that they could use, not only in visual effects, where Alembic and USD were originally created, but for all kinds of other applications, including architecture.
Today, you see companies like Trimble and McNeel integrating USD into SketchUp and Rhino, which is great.
For us, with V-Ray generally, and not just V-Ray, all our products are basically one part of a larger pipeline. Our products are just one portion of the workflow for our users, and we need to be able to exchange information with the rest of that workflow. It's super important for us to integrate as smoothly and easily as possible with the rest of our customers' tools. USD seemed to be the natural choice for that, especially as adoption grew with the various other tools.
Marc Petit:
You had your own representation.
All the Chaos products use a format called VRayScene. Were you ever tempted to open-source VRayScene?
Vlado Koylazov:
We have thought about it, but open-sourcing something is really not enough to make it successful. It requires a conscious effort to support your solution, even if it's an open-source one. I don't think Chaos as a company can do that. Somebody like Pixar or ILM can do that, but it has to be somebody with some weight in the industry and somebody that people will rally behind.
Yes, I've thought about the VRayScene file, but it's a very specific format originally created for V-Ray, and it doesn't really handle a lot of stuff. We only store the information needed for rendering in VRayScene files, but there's a lot of stuff in the 3D scene that is just there so that people can manipulate it more easily.
There's a hierarchy of objects and all kinds of additional metadata that people need to work with a scene but that is not needed for rendering. From that point of view, the VRayScene format may not be the most suitable. USD was designed to be extended with these additional types of information.
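For readers who haven't touched USD directly, a minimal example using Pixar's openly available Python bindings (the usd-core package) shows the kind of scene description being discussed; the file name and prim paths here are just placeholders:

```python
# pip install usd-core
from pxr import Usd, UsdGeom

# Author a tiny interchange scene: a transform with a sphere under it.
stage = Usd.Stage.CreateNew("example_scene.usda")
xform = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

# The same file can carry hierarchy and pipeline metadata alongside the
# render-relevant data, which is the extensibility contrasted with .vrscene above.
stage.SetDefaultPrim(xform.GetPrim())
stage.GetRootLayer().Save()
```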
Marc Petit:
Conversely, could you see VRayScene fade into the sunset and be replaced by USD for internal Chaos product communication?
Vlado Koylazov:
It's possible and certainly something that we have thought about. However, it's not the right time yet for us because USD is still in development, and new portions of the API are still being figured out. We joined the AOUSD because we want to keep track of USD's development so our products are as prepared as possible for the evolution of the standard.
But, at some point, when things settle down a little bit, I can see USD becoming the format we use for 3D data, even internally, for our own tools.
Marc Petit:
When you look back at everything you've been doing in computer graphics, is there something that stands out as a significant achievement so far that you are particularly proud of?
Vlado Koylazov:
Proud of? Implementing the light cache in V-Ray, I think, was pretty important because this is what actually made it useful for architecture; it helps to get this realistic multi-bounce lighting, especially for interior scenes, which made V-Ray so successful. Again, V-Ray related, but I really like the way the original V-Ray architecture was designed. It's not perfect by any means.
I've done a lot of stupid things in retrospect, but even so, it's been over 20 years now, and this overall software architecture still works, and it allows many people to work on the code at the same time. I think that turned out pretty well. Of course, if I started today, I'd probably do things slightly differently, but I think it went pretty well for a newbie.
Patrick Cozzi:
To wrap things up, is there a person or persons or organization you'd like to give a shout-out to?
Vlado Koylazov:
All my colleagues, to be honest. A lot of work goes into our products, and the amount of code we've written over the years and the ideas that went into that code are really amazing. Yeah, I just cannot stress enough the contribution everybody has made. Building a successful product is a team effort, with a lot of people involved in development and everything else around it, sales, and marketing. It's just an enormous effort, and if you don't have the right people, then nothing happens.
I think we were very lucky in this regard.
Marc Petit:
So, a big shout-out to the team, still mostly in Bulgaria?
Vlado Koylazov:
Well, we're a bigger company now, so yes, the Sofia office is probably the biggest one, but we have people all over the world.
Marc Petit:
Vlado, it's been a privilege to chat with you today. You're a pioneer in computer graphics and the co-founder of Chaos. Your innovations with V-Ray and Vantage are revolutionizing the industry, and you've been setting new standards for rendering technology. It was fun to see your commitment to pushing the boundaries of what's possible, and you continue to inspire and shape the future of visual effects, visualization, and now even virtual production.
Thank you for being with us today, and thank you for sharing this incredible journey and those insights with us.
Vlado Koylazov:
Thank you. Thanks for inviting me.
Marc Petit:
And thank you, Patrick, and a big thank you to our ever-growing audience.
You can find us on all the podcast platforms, reach out to us on our LinkedIn page, or visit our website, www.buildingtheopenmetaverse.org.
Thank you, everybody. We'll be back for another episode.