Real-time Graphics in Live – Past, Present and Future

4:50pm in Studio A

This session will discuss the contribution and impact of real-time graphics & rendering on the live events world, its history, the current state of the art, and where things may go next.


Matt Swoboda  00:12

So you might know me as the developer of Notch, and you're probably expecting me to talk about Notch. But I'm not going to, actually — I want to talk about something else. Although it will probably come up, let's be honest. I've worked in graphics for twenty-something years. I worked in video games before I worked in live, so I had a whole other career before live, and I've probably been doing graphics programming for about 30 years. What I want to talk about today is the relationship between the real-time graphics industry as a whole — the things that have happened in it — and how it has impacted live over the years, how that two-way street goes. I want to talk about the past, the distant past, all the way through to the present day, and also try and make some predictions about what's going to happen in the near future.

There's always a relationship between hardware and software and its users. Software is developed for a certain industry, users use it in that industry, and they feed back: they ask for things, they want changes, and the software and the hardware change to accommodate them. But at the same time, the software and the hardware impose limits on the creativity of the user. There are so many cases of this. Imagine shows 10 or 20 years ago that heavily used the creative effects possible with the image processing in a Catalyst or a Hippotizer, or the mapping possibilities of d3, or all the shows that went out with the stock Notch effects back in 2015 or so, or all the rocky landscapes you saw when UE5 came out, because it was suddenly easy to make good-looking rocky landscapes. There's always this two-way street: the technical possibilities feed what gets done creatively.

But the issue with the live video world is that it doesn't have a massive hardware and software development capability in graphics. Let's be honest: most of the work done on graphics — people making GPUs or large engines — isn't made for live. It's made for other things, like video games. People don't build a GPU thinking about live, sadly. So while there is a two-way street, and there is stuff developed particularly for live — there are software companies working only in live, the people making media servers and so on — a lot of the stuff you use wasn't made for you originally. It's things you've appropriated: After Effects, Cinema 4D, Unreal Engine. These things were not originally made for you, but you've had to use them. So the relationship between graphics and live is, in a way, more of a one-way street: you're getting stuff pushed onto you that you have to take advantage of. That's why I think it's important to understand the background of how this stuff came about.

Think about the hardware capabilities. When the Kinect camera came along, all the tracking, all the things that were suddenly possible because of it — that was a camera built for an Xbox; it wasn't built for the many interactive projects that used it. Or the LiDAR scanners we've taken from automotive and used for tracking, all the way through to the present-day AI innovations being done by NVIDIA, the virtual backgrounds and so on.
They're really useful for live, but they're not made for live, and you often had to squeeze them into a project and deal with their creative limitations — anyone who has had to wire up a Kinect camera to make it work on a stage will remember. And things are only going to keep going that way: a lot of the tools being developed now are still not developed for you.

So I want to go back through history a bit and see how things have come about. I'm going to rush, but fine, sorry about that. I'm going to go right back to the Commodore 64 — I'll skip really ancient history and start in the 8-bit era. This is the start of the home computing boom, the Commodore 64 and its contemporaries, and this is how games looked back then. The hardware was massively limited, a very low colour palette; you really couldn't do very much with it. There was other hardware around — Silicon Graphics workstations — but not many people could afford those at home, so they were not fully explored creatively in the way home computers were. At the same time there were video games — you can see a couple of well-known ones here — very 2D, low colours, the same thing.

The cracking scene spun up in the 80s. This was the scene of taking a video game, removing the copy protection and putting it out, flogging it down the market. But what happened with the cracking scene is that a lot of the people doing the cracks wanted to make themselves famous, so they started putting little intros on the beginning of the video games they'd cracked, to show their name and so on. And this was the real start of a creative coding scene on computers — people doing these creative things just for the hell of doing creative stuff. This later grew into the demo scene. (Fucking hell, I keep pressing the wrong key. There, the right key.)

So move forwards into the late 80s and 90s, and you get to the Amiga. The Amiga is the 16-bit era now, and video games have come a long way — the visual quality has come along, you've got a lot more colours, you can do your 256 colours, my God, and even basic 3D is now possible. At the same time, the cracking scene has grown into the demo scene. The demo scene is the original — the OG — creative coding: making linear, non-interactive experiences in real time on a computer. And if you compare videos of the games back then with demos like this, demos looked better than games. They were more creative and more interesting; they could push boundaries in a way that games couldn't. If you were into computing in the 90s and had a home computer, an Amiga — if you were into pixelling or doing music on a computer or so on — chances are you were in the demo scene, because that's where it all happened.

Making graphics on these platforms was really difficult. This is pre-internet, really. There weren't many examples going around — you used to have to buy magazines and type stuff in off the listings just to try things out. If you wanted to do graphics, you had to figure it out yourself. If you wanted to write some 3D, you had to write your own polygon fillers, your own transform routines — you had to write everything.
So it was really difficult and very time consuming, which meant there was a very high technical bar just to do anything. And people did use it live — there were a few examples of people VJing off Amigas and so on — but these were very early days, and there was a relatively limited amount of it. And often, while the output might have been real time, the process to build it was very much not.

There is one hardware example from that era: the Video Toaster. Does anyone remember the Video Toaster? Classic. This was made by NewTek, the guys who made NDI, who we know and love. It was an add-on card for the Amiga that pushed the Amiga into broadcast; it was used for things like Babylon 5 and various other things, and their trailer video is just amazing. I think it's the real piece from the era that enabled live video manipulation, capture, chroma keying and so on. It was like a very early Notch, but in hardware form — it had all these kinds of capabilities, very early, back in the day.

At the end of the 90s — this is a great bit, I love the sphere — there was a major shift: the first GPUs appeared for the PC. We're talking 1997 or so, 3dfx. A GPU at the time, all it was really capable of doing was being given a list of polygons and filling them. That was all it could do, but it could do it a lot quicker than you could in software. (The video guy should have just let that run for the whole talk, that reel is great.)

So this is how video games looked after the GPU came along — Battlefield 1942, released in the early 2000s, is a good example — and you can see, by the end of the decade, from the same developer, this is how stuff looked. The big thing about the GPU was not just the hardware, but the APIs that came along to support it. OpenGL and DirectX came along, and combined with the hardware it meant you could get polygons on screen very quickly. You could write 10 lines of code and have a polygon on screen, rather than writing thousands of hand-optimized lines of assembler to achieve the same thing in software. This happened at the same time as the internet became widely accessible, which meant you had a ton of tutorials, plus the software and the hardware, to make graphics. You could actually do graphics without being a very good coder. And this is arguably what led to the boom in creative coding — this is why creative coding became a thing, because all of a sudden it was quite easy, you could do it. Particularly when you take it in conjunction with things like OpenCV, which gives you a load of tools for image and video processing and analysis. All of a sudden you had these bits you could plug together.

You can see the pace of it in PC games in the 2000s — GPUs got fast really, really quickly, so everything went 3D. Initially all the GPU could do was shade triangles, so games like the one on the left looked very basic: textured, simple stuff. But over time GPUs became programmable, and you could actually write code that ran on the GPU, which really enabled people to do very clever materials and so on.
So by the end of the decade the possibilities had increased enormously, and as the hardware got more powerful, the demo scene moved onto PC as well. The thing with PC demos is that they made much more creative use of the GPU than video games could — in the demo on the left, everything is made from particles, in 2005.

The 2000s, as I said, were really the golden age of creative coding. Frameworks such as OpenCV — sorry, openFrameworks — and Cinder appeared. Has anyone here ever used openFrameworks? Yeah, there must be some. This made it even more accessible to code graphics: you could make an interactive graphics project very easily just using their toolkit and their samples, and it had support for video playback and camera capture, and image processing from OpenCV and so on. While the capabilities were pretty limited, and compared to games and demos it did not look spectacular, it was often good enough to do something creative. And the limitations often drove the aesthetic: people knew they could make flat-shaded polygons look good, so they used flat-shaded polygons and lots of text and that kind of thing.

At the same time, you had projectors that were powerful enough to be viable, and cheap enough to use, on those kinds of shows, and you had a MacBook Pro with a GPU good enough to run stuff live in real time. And so suddenly projection mapping came along — people started to do projection mapping a lot — because you had the intersection of projectors that were good enough (you get a 15k projector chucking an image on a building in the dark, it looks good), a MacBook Pro good enough to run them, and the software toolkit available to do it. Why do you think these things happen? Because the pieces come together that make them easy enough, doable enough, that people do them. We also started to see the first real-time tools — Max/MSP is quite a lot older than this, back from the 80s, but TouchDesigner's first version was 2002, I believe — so you started to see the first tools.

I also have to mention UVA here, and I think UVA is exemplary of a range of creative studios at the time. What UVA did really well, in my opinion, is that they took these technologies — the low-res LED, the toolkits for creative coding, and good engineering as well — and they really found an aesthetic that overcame the creative limitations of what they were dealing with. They went for black and white with a little bit of red; on LED it looks good, it punches through. Lots of stark text, 2D text rather than going for 3D that would have looked a bit dodgy. Often very structural, simple forms that look great when you put them on LED. Even for the time, what they did wasn't technically that advanced, but they made it look good — they made it art. There were a load of studios doing the same thing at this time, but I think UVA were arguably one of the standouts for the quality of the work and the fact they really made it art.

Towards the end of the decade there was another major development: GPUs went past just filling triangles. They weren't just filling triangles on screen anymore — they were actually capable, fast and flexible enough to do other stuff, other bits of the pipeline.
So compute — things like CUDA — came along at around that time, and people started using GPUs for simulations, finance, AI: anything that needed a lot of number crunching you could do on a GPU. This is something games were actually quite slow to adopt, because they already kept the GPU very busy, so it didn't really take off in video games for another five years or more. But for live and interactive projects it opened up some really interesting creative capabilities. You don't necessarily need loads of 3D in live projects; often what you want is a nice interactive particle effect or a smoke effect or something like that, and now you could do that on the GPU. So all of a sudden you start seeing projects that use a lot of particles, a lot of sims like that, because you could do them on the GPU — you could do ten times more — which meant they actually looked good enough to be viable. Previously you could do it, but it didn't look good enough, so you didn't want to do it and clients didn't want it. Now you could make it look good enough that it was worth doing.

So look at PC games in the 2010s. I've shown two versions of the same game, and you can see how far things came visually — in around 20 years it's astonishing — as everything moved towards photorealism. One of the most obvious changes in video game development over those years was the size of the team responsible for production. When I started in video games in 2001, you could make a AAA game with 20 to 30 people. By the end of that decade, into the middle of the last decade, teams were more like 300 to 500. The increase in scale was enormous, and most of those people were involved in art production. All those assets — all those many variations of crates and walls and so on — had to be made by someone, and the demand for asset quality kept going up and up. Back in 2001 you could make an asset with a few hundred polys and one texture, very quickly. But when you have to make 10,000 polys and loads of textures and detailed materials, the production process is much longer, so you have to change the processes to accommodate it.

The first thing that changed heavily was the use of outsourcing and stock libraries for art production. Initially it was outsourcing, because there weren't external libraries back then. Rather than build everything in house, you spec the modelling work and send it out — to India or somewhere it can be built a lot more cheaply — and they send it back. This is something I had kind of imagined would be done more in live, but we don't really see it. You can see that as demands and production go up, you have to split the thing into pieces that can be distributed out to other places and done more cost-effectively. Rather than the in-house team doing everything in the game, the in-house team just does the lighting, the finishing, the hero characters — the stuff that's worth doing in house — and all the props and environment pieces are outsourced.

Back in the early 2000s, when all the graphics were done in house, everything could be bespoke. You could make different things for every location; you could paint the lighting into the vertices or into textures, because you knew exactly where it was going to be used — you just painted it in.
And you could do it all by eye — just guess, do it by eye, make it look good. Every engine had its own look, its own lighting and material model, its own calculations. But with the higher demand for content and for outsourcing, you have to standardize everything, because external studios need to be able to build stuff without having to use your internal engine. This is where PBR came in. Is everyone familiar with PBR from the 3D world? PBR: Physically Based Rendering. It's pretty much standard now — every graphics tool these days has the same kind of material pipeline and the same look. Why do you think that was done? It's not because it looks better — arguably it does look better — it was done because you need to standardize everything in order to be able to outsource graphics. Rather than every engine having its own calculations, its own materials, its own parameters, you standardize it all: a standard set of equations, standard textures, standard parameters. That means you can share stuff between tools and engines. It also means you've separated material from lighting, so you can make a material and it works in any lighting environment — move it to a different lighting environment and it still looks like the right material. Previously it was all fudged: it was all done in the material to make the lighting look good. The adoption of PBR meant an outsourcing studio could build an asset, test it in their own environment, and have reasonable confidence that when it moved into the game engine it would still look right.

More recently we've seen stock libraries appear, such as Megascans — a great library. The challenge with stock assets is art style: the assets you use in a project have to fit the art style of what you're doing. If you have a cartoon thing and you drop in a random sci-fi character, it won't look right — everything has to be done in keeping. So if your game is stylized and you go to a stock library, how are you going to find a load of stock that fits your style? It's interesting — Brian Karis from Unreal was discussing this. Unreal has made a massive drive towards photorealism, and towards handling much higher poly assets as well. If photorealism is your art style, then all you need from your assets is that they're photorealistic. You don't have to worry about different styles; it just has to look real. It's like making a movie: if you need some props, you go to the prop shop down the road and borrow them. If you're making a game, you should be able to go to the virtual prop shop, buy a load of virtual props and drop them in — you don't have to make everything anymore. It massively reduces the production cost. That's the theory, anyway. So you can see that a lot of the engine work has been driven partly by photorealism being very popular, but also because the standardization is really useful for production. The irony is that photorealism used to be considered the difficult, expensive thing — it was really hard to do. Now you can see that in the future photorealism is probably going to be the cheap, easy thing to do, and anything stylized is going to be difficult.
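To make the standardization point concrete, here is a minimal illustrative sketch of the metallic/roughness material model most PBR pipelines share — not Unreal's or any specific engine's code, and the names and numbers are just the common convention: the material is a handful of standard parameters, and the lighting maths is one fixed set of equations, which is what lets an asset built in one tool light correctly in another.

```python
# Minimal sketch of the standard metallic/roughness PBR model (Cook-Torrance with
# GGX). Parameter names (albedo, roughness, metallic) follow the common convention;
# this is illustrative, not any specific engine's implementation.
import math
from dataclasses import dataclass

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, b): return tuple(x * y for x, y in zip(a, b))
def normalize(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

@dataclass
class Material:          # the whole "standard material" is just these parameters
    albedo: tuple        # base colour
    roughness: float     # 0 = mirror, 1 = fully rough
    metallic: float      # 0 = dielectric, 1 = metal

def ggx_ndf(n_dot_h, roughness):
    a2 = (roughness * roughness) ** 2
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g(n_dot_v, n_dot_l, roughness):
    k = (roughness + 1.0) ** 2 / 8.0
    g = lambda x: x / (x * (1.0 - k) + k)
    return g(n_dot_v) * g(n_dot_l)

def fresnel_schlick(v_dot_h, f0):
    return add(f0, scale(tuple(1.0 - f for f in f0), (1.0 - v_dot_h) ** 5))

def shade(mat, n, v, l, light_colour):
    """Direct lighting for one light. The lighting maths is fixed; only the
    material parameters change, which is what makes assets portable."""
    h = normalize(add(v, l))
    n_dot_l = max(dot(n, l), 0.0)
    n_dot_v = max(dot(n, v), 1e-4)
    n_dot_h = max(dot(n, h), 0.0)
    v_dot_h = max(dot(v, h), 0.0)
    # metals tint their reflection with albedo; dielectrics reflect ~4%
    f0 = add(scale((0.04, 0.04, 0.04), 1.0 - mat.metallic), scale(mat.albedo, mat.metallic))
    spec_scale = (ggx_ndf(n_dot_h, mat.roughness) * smith_g(n_dot_v, n_dot_l, mat.roughness)
                  / (4.0 * n_dot_v * max(n_dot_l, 1e-4)))
    specular = scale(fresnel_schlick(v_dot_h, f0), spec_scale)
    diffuse = scale(mat.albedo, (1.0 - mat.metallic) / math.pi)
    return mul(add(diffuse, specular), scale(light_colour, n_dot_l))

# Same asset, two lighting setups: only the light changes, the material carries over.
brushed_metal = Material(albedo=(0.9, 0.6, 0.2), roughness=0.4, metallic=1.0)
print(shade(brushed_metal, (0, 0, 1), (0, 0, 1), normalize((0.3, 0.3, 1)), (1.0, 1.0, 1.0)))
print(shade(brushed_metal, (0, 0, 1), (0, 0, 1), normalize((-0.5, 0.2, 1)), (0.2, 0.3, 1.0)))
```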
All of that does have some relevance in live. There aren't that many live projects that are photoreal, outside of virtual production — there's a lot more stylization involved. And it's also unusual in that, particularly in music, typically one designer builds a whole track; you don't have that production flow. In video games you have a whole production pipeline; you don't really see that so much in live, but I think it's inevitable that as live projects have to use more 3D and get bigger, you'll have to explore different ways of getting these assets.

And then there are the engines themselves. At the beginning of the 2000s there were very few third-party engines used in games — RenderWare, I think, and not very many others. They were used in the industry, but by and large they were rendering toolkits, or level tools designed for a certain game. Unreal Engine was just getting started; Unity didn't start until the middle of the decade. But by the end of the decade, Unity and Unreal had a massive presence in the video games industry. While internal teams making their own engines could beat Unreal or Unity in terms of performance or rendering quality, they couldn't match them for the toolkit, for the tools. Tools made by game teams had a notorious reputation for being total garbage to use — the only people who used them were the people paid to use them, because they had to; no one wanted to use those tools. The engines had their limitations back then, but the appeal of having proper tools — a working manipulator you could use to drag stuff around, and so on — was just too great. So game teams started asking: they wanted the engines, they wanted the user experience. And ultimately there was so much concentration of development in the engines, giving them this massive feature set — everything from a whole pipeline for baking lightmaps and optimizing LODs all the way through to the rendering — that you just couldn't match it in house.

The other advantage the engine companies had over in-house teams making games and making their own engines is the marketing and commercial development side. This is slightly controversial, but the engines started marketing, they started doing deals with publishers, and publishers started to push engines onto projects — because they wanted the engine on the project, or because they were told it would be better if they used the engine. This game will be good if we use Unreal; this game will be good because we're going to use Unity; it's less risky to do this project because we're going to use Unity — that kind of thing. It started to get harder and harder to get funding for projects unless you were using certain engines. And this really led to a decline in the number of companies who make their own tech. Twenty years ago everybody made their own tech; now very few companies actually do. It used to be one of the key roles in video games — the graphics engine programmer — and now it's a dying art. Which unfortunately means that if you do want to make a custom engine, it's quite hard to find the people who can do it.

I think in live we've seen a similar trajectory over the last decade. Originally, in the 2000s, everyone was creative coding.
Interactive stuff was often considered a bit of a science project, because it used custom code. Fast forward to the middle of the last decade, with things like Touch and Notch: you don't do custom coding anymore, you use these tools, and it's considered far less risky because you're using off-the-shelf software. Creative coding has really dropped off — people still do it, there's a bit of Processing or three.js — but how often do you see someone writing an openFrameworks app for a live production anymore? It's much less common. And, you know, Notch has been somewhat responsible for that decline, for better or worse. But these tools have really increased the accessibility of making real-time projects. Even 10 years ago real time was still considered a risk; nowadays it's everywhere in projects, it's become standard. When I used to talk about this 10 years ago, people gave me funny looks — why on earth would you do that, it's never going to happen — and now it's pretty much standard.

Also in the 2010s, the video games industry really split — a split into AAA and indie, whereas previously there were a lot of games at the mid level. In the old days every game was sold in a box, so you were guaranteed some sales: it might end up in the Walmart bargain bin, but someone's gran is still going to buy it for a birthday, so it still sold units. The problem came when things went online, when games went to download and onto mobile phones. With downloads, you can put a game out and sell zero copies. I know some guys who did a PS4 title, released it on download, and it sold 64 copies. The risk became massive: rather than always being guaranteed to sell 100–200,000 units, now you could do something that sells millions or sells nothing, and the costs are huge too. So there's a very small number of very, very big titles, like Call of Duty, that people play for a very long time; the industry is very risk averse — lots of sequels, it's all franchises and sequels. And then there's indie, doing all the interesting stuff with no money and all the risk. You can imagine this being true in live as well: as the cost goes up and the demands go up, are people really going to be able to do the risky things when there are a lot more stakeholders and a lot more money involved, or does the risk and the interesting stuff drop down to the smaller productions that can take it?

The other relevant thing about the engine companies is the size of the companies involved. Live events is used to small companies — you're used to dealing with small vendors on the software side. A typical live events software company is 20 people; 10 people is small. The vendor is often a similar size of company to the client, and that means the client — your companies might be 20 people, 30 people, some a lot bigger; some of the clients are much bigger than the companies providing this stuff — has a certain amount of influence on the vendor. You can ask for stuff, you can get support, you can say: I want this, will you do it?
And they'll do it, and they'll get involved. But the scale of companies involved in this space — and we've seen this particularly over COVID and over the last five years — has gone through the roof. We're talking billions: Unity is a multi-billion-dollar company, and so is Unreal. They're much bigger players than any of the players in the live events industry. So the power balance has shifted, and they wield influence — sometimes directly, via things like MegaGrants; sometimes, you know, some companies say we will support you, but only if you give us the maximum rights; and others are able to wield influence just by doing marketing. And what this means for you is that it's not just you they're marketing to — they're marketing to your clients. Your clients are going to come along and say: I want this, because I've seen this ad on Instagram, it looks super good, everyone does it. The scale and power of these companies is a shift for live events. I don't think live events is used to dealing with this kind of scale, and I think there's a change that has to happen in mindset.

So — I'm going to go back a slide first — the reality is that many of the techniques and tools and engines in the graphics world are taken from video games; we've repurposed them. They were developed first for video games — I keep banging on about it through most of this talk — and they're only really suitable for a certain tech stack, they only do certain things. There are some real key differences between video games and what you do in live events, which means the tech doesn't naturally translate all the time. If you think about it, a video game renderer — the graphics engine — is pretty narrow, actually. All it really has to do is put objects on screen. A lot of the geometry is static. A lot of the lighting is baked, or the lights don't move: it's either sunlight that doesn't move, or it's inside with lights that don't really move. The camera motion is controlled by the player, so it's very predictable — you know roughly where the camera will be — and it's only one camera that doesn't cut very much. Animations are often baked. Visual effects like smoke can be handled with pre-rendered cards that don't need to be interactive. The tools are targeted at very large teams, with processes in place to bake lighting, optimize materials, build LODs — this whole process. They also have to make everything play nice with streaming, so they build environments that are predictable — you can slow the player down to make sure they're not going to pull too much in too quickly. AAA games have a massive optimization cycle: the last year of development of a AAA game is usually taken up with optimization. It's an enormous amount of work, and optimization includes things like fixing the assets — working out a bit of aliasing on the edge of a box, all these little fixes. Video games don't have to worry about splitting renders across multiple machines; you basically only have to worry about going to a 4K TV at most. And if you are running 4K, it's going to be upscaled — no one renders at 4K, you render lower and upscale it — and they accept some quality drop in order to maintain frame rate; they don't mind if the render quality drops. In live events, I don't believe clients are that tolerant.
If you drop frames, or drop quality, or pop a lot, or whatever — the demands are different. And the production process in live is completely different: you don't have a year of optimization, you genuinely don't. Your turnarounds have to be super fast; you don't have time to polish and optimize the way video games do, unfortunately.

So I'm going to talk about a couple of current or future graphics techniques that are worth thinking about in live a bit more. It's interesting with Unreal: Unreal introduced Nanite and Lumen. Nanite is a really clever thing — it's a combination of mesh clustering and simplification for rendering polygons, and they have a software rasterizer as well for doing very tiny polygons really well, which is something GPUs do very badly. But this engine is an outlier; it's not where everyone else is going. The reason they did it was that they had to target the current generation of consoles, which didn't have good ray tracing hardware. The rest of the world — the non-games world — is going towards ray tracing. The two big developments in the last few years, and going forward, have been GPUs gaining ray tracing capability, and AI. So let's talk about both. But first, let's talk about scalability.

Live always wants to handle bigger renders — 100-million-pixel canvases; it's about sets and so on — and you want to split your renders between multiple GPUs on multiple machines, because you can't do it on one machine. That's something no one else has to do. Games don't have to do it; no one has to do this except live, really, so you're an outlier. The problem is that the techniques used in a lot of products aren't designed for the case of being split, so it doesn't necessarily work that well. Things like the shadow generation or probe generation have to happen the same on every machine. So you add two machines and you don't double your performance — it might go up by 1.3x, something like that — because the stuff that can't be split has to be shared and done on every machine. Making this work well requires engineering, live-targeted engineering, and that isn't really happening that much at the moment. The same is true of simulation: if you've got a big particle effect, it probably has to run the same on every machine, so splitting doesn't make any improvement, it doesn't make any difference.

Some of the time there is technology coming along that helps — the Mellanox networking parts that enable stuff to be streamed directly to a GPU, GPUs with a lot more addressable memory so you can have video just sitting there in memory — these kinds of things do help in this area. But really, the thing we need to develop for the future — hopefully over the next couple of years — is the ability to design stuff to be split and scaled more effectively. And the future of that is not cutting a render down the middle and putting it on two machines. The future of that is breaking up the pieces of the pipeline, doing them on multiple machines and streaming them between machines, like a render farm. We don't want to build video games; we want to build render farms, but live, real-time ones.
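As a purely illustrative sketch of that render-farm idea — the stage names, frame payload and queue-based "network" below are invented for the example, not any product's design — each box owns one slice of the pipeline and streams its output to the next, so adding a machine adds a stage rather than splitting a frame down the middle:

```python
# Illustrative sketch only: a stage-per-machine render pipeline emulated with
# threads and queues standing in for machines and network links. A real system
# would stream GPU buffers over fast networking rather than Python queues.
import queue
import threading

def stage(name, work, inbox, outbox):
    """One 'machine': pull a frame from the link, do its slice of the pipeline,
    push the result downstream."""
    while True:
        frame = inbox.get()
        if frame is None:                 # shutdown signal, pass it along
            if outbox: outbox.put(None)
            break
        frame = work(frame)
        frame["trace"].append(name)
        if outbox:
            outbox.put(frame)
        else:
            print(f"frame {frame['id']} out: {' -> '.join(frame['trace'])}")

# The slices of the pipeline. Each box only needs the upstream stage's output,
# not the whole scene.
def simulate_particles(f): f["particles"] = f"sim:{f['id']}"; return f
def shade_and_light(f):    f["shaded"] = f"lit:{f['particles']}"; return f
def post_and_output(f):    f["final"] = f"post:{f['shaded']}"; return f

links = [queue.Queue() for _ in range(3)]   # the "network" between boxes
stages = [
    threading.Thread(target=stage, args=("particles", simulate_particles, links[0], links[1])),
    threading.Thread(target=stage, args=("shading",   shade_and_light,    links[1], links[2])),
    threading.Thread(target=stage, args=("output",    post_and_output,    links[2], None)),
]
for t in stages: t.start()

for frame_id in range(3):                    # feed a few frames through
    links[0].put({"id": frame_id, "trace": []})
links[0].put(None)                           # shut the pipeline down
for t in stages: t.join()
```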
So one machine does the particle effects and streams them to the next machine, which does the shading and lighting, and streams that to the next machine — or several machines — that output it on screen. And then you might have another machine doing post effects, or bits of processing, and piping it in. This is the kind of thing that needs to work.

So let's talk about ray tracing, which was a major shift. RTX came along and introduced the ray tracing hardware — GPUs can do ray tracing now — and this is really great, because it means path tracing has become viable; you can actually do it. Path tracing is a light simulation method where you cast loads and loads of sample rays that bounce all around your scene; each one works out what it hits, shades it, and gives you a colour, and if you do this 10,000 times per pixel you get a nice result, a very realistic render. The problem is that it's noisy, and it takes a very large amount of time to deal with all those samples. That's a huge problem even for offline rendering — if you've got minutes or hours per frame it's not very practical — so in real time it's a complete nightmare. It's also very problematic at high resolutions, because more pixels equals more time: 10,000 samples times the number of pixels. But at least with ray tracing in the hardware, it became viable to attempt this.

The thing that releasing ray tracing hardware actually created — the most interesting thing about it wasn't the hardware itself — is all the research that got kicked off when that hardware became available. Suddenly what had been a very niche area of research, making path tracing look good with very, very few samples, became the big hot topic, with loads and loads of people putting effort into it. So it actually became quite viable. You can see in my example here: a render done in Notch with, like, 2,000 samples, and how it looks at one sample. If you'd done this in an old version of Notch, it would just have looked like noise, like nothing. But because of all the research that's been done, including by ourselves, we've managed to get a one-sample-per-pixel image that's quite viable — it's noisy, but you can render it, even put it on screen. And one sample per pixel we can do in real time.

The advantage of this in live is that path tracing splits perfectly: double the number of GPUs, double the number of cores, double the number of boxes, and you double the speed. It splits linearly, and that is key — if you're going to go to big canvases, you need techniques that split as close to linearly as possible. Path tracing also looks really good, and it's a lot easier to make renders with a path tracer, because you don't have to fiddle things and fake things; it just works. So as this becomes viable for live, it will make production easier. It also scales very well with content: whereas Nanite had to do a huge amount of work to make a scalable renderer, path tracing scales way better, basically linearly — if you have very high poly scenes, it just works.

The other part of it is the denoising. I took the same scene — it's still flickering a bit, and there's still some blurring, but this is actually not that far off.
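A toy sketch of the sampling loop being described — nothing like a production path tracer, and the "scene" here is faked with plain noise — but it shows the structure: the cost is samples times pixels, one sample per pixel is fast but noisy, and the work carves up almost perfectly across machines by pixels or by samples.

```python
# Toy sketch of the per-pixel sampling loop in a path tracer: not a real renderer,
# just the structure that makes the cost (samples x pixels), the noise-vs-samples
# trade-off, and the near-linear split across machines visible.
import random

WIDTH, HEIGHT = 8, 4            # tiny "canvas" so the numbers are easy to read

def trace_one_path(x, y):
    """Stand-in for 'fire a ray, bounce it around the scene, return a colour'.
    Here it's faked: a smooth gradient plus heavy random noise per sample."""
    true_value = (x / WIDTH + y / HEIGHT) / 2.0
    return true_value + random.uniform(-0.5, 0.5)

def render(samples_per_pixel, pixels):
    """Average N independent path samples per pixel; more samples, less noise,
    and the cost grows linearly with the sample count."""
    image = {}
    for (x, y) in pixels:
        total = sum(trace_one_path(x, y) for _ in range(samples_per_pixel))
        image[(x, y)] = total / samples_per_pixel
    return image

all_pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# One sample per pixel: fast but noisy (what the hardware plus denoising makes usable).
noisy = render(1, all_pixels)
# Many samples per pixel: clean, but 2000x the work.
clean = render(2000, all_pixels)

# Splitting across "machines" is trivial: each box takes a slice of the pixels
# (or a share of the samples) and the results just concatenate or average.
half = len(all_pixels) // 2
machine_a = render(2000, all_pixels[:half])
machine_b = render(2000, all_pixels[half:])
combined = {**machine_a, **machine_b}
print(noisy[(4, 2)], clean[(4, 2)], combined[(4, 2)])
```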
And so, I guess, the elephant in the room: AI. You can't really avoid it. This is tricky for me, because as a software vendor it's not really up to you — the moral aspect of this is a little dubious, some of your users hate it and some of your users love it, so it's a very hard place to be for a software vendor. The Stable Diffusions and Midjourneys — the idea of taking all of human creative output, putting it in a box and selling it back to you, is definitely anathema to some people, and until the legality and morality of it are resolved, it's dodgy.

However, ML has way more applications than generating images, and we've already seen AI and ML being used quite a bit. Here's an example: we use the NVIDIA body tracker in Notch. This is background removal. AI for computer vision is an amazing tool — you can solve a lot of problems that require a bit of fuzzy thinking really well with it, and there's been a huge growth of these things. There was a paper last week on solving green screen using AI, and they didn't do what you'd think: they didn't use it to remove the green. What they do is, you have the green screen and you shoot everything in magenta, and then the AI regenerates the green channel — you cut the green out, get rid of everything, and then just regenerate the green with AI. If you can shoot everything in magenta, that sounds like a very viable use case.

There's also been a load of ML applied to graphics itself, and this is way more interesting to me. We've seen things like denoising — you saw that already — and DLSS, which you're probably familiar with, and AI denoising. These are early use cases that are really applicable. AI denoising is quite logical in how it works: you take the noisy image and train it against the fully refined image, enough times that the network learns how to turn a noisy image into a refined one; put enough through it and it works. DLSS is a similar example: you train the neural net on motion vectors and on aliased versus anti-aliased images, it works out how to do it, and you get a small neural net you can run in real time. There are loads more interesting use cases in the pipe, like upscaling simulations: you generate a low-res smoke sim, or a particle sim with 100,000 particles, and then use a neural net to upscale it to 10 million, 100 million. And material compression: there's already the use case of taking an albedo — a colour image — and having it generate the normals and specular and so on for you, but people are using it for compression now. If I have a material with a load of channels, rather than storing all the channels separately, I use a neural net to generate the other channels from the albedo — a small neural net trained on that one material, so it knows how to regenerate it. This is where things get really interesting, I think: all these kinds of problems now having ML applied to them is going to really increase the rate of movement.

For live in particular, I think you're seeing AI take on things that previously required hardware. Instead of doing mocap with a suit, you do it with a neural net.
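A minimal sketch of that noisy-to-clean training idea — a tiny PyTorch example with an invented network and random stand-in images, not NVIDIA's denoiser or DLSS — just the shape of "show it enough noisy/refined pairs and it learns the mapping, then inference is one cheap pass per frame".

```python
# Minimal sketch of training an image denoiser on noisy/clean pairs. The network,
# data and sizes are invented stand-ins; real renderer denoisers also feed in
# albedo/normal/motion-vector buffers and are far larger.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # Predict the residual and add it back: the net only has to learn the noise.
        return noisy + self.net(noisy)

def fake_batch(batch=8, size=64):
    """Stand-ins for training data: 'clean' = a converged high-sample render,
    'noisy' = the same frame at one sample per pixel."""
    clean = torch.rand(batch, 3, size, size)
    noisy = clean + 0.3 * torch.randn_like(clean)
    return noisy, clean

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                      # training: show it enough pairs
    noisy, clean = fake_batch()
    loss = loss_fn(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference is one cheap forward pass per frame, which is why a small trained
# net can run in real time on top of a 1-sample-per-pixel render.
with torch.no_grad():
    one_spp_frame, _ = fake_batch(batch=1)
    denoised = model(one_spp_frame)
print(denoised.shape)
```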
The same goes for camera tracking: instead of a tracking system, you use a neural net — you just have a camera, and the neural net works out where it's going. Do you need tracking points, or can you just get a neural net to do it? These kinds of things, if and when they work and are brought in, obviously make these technologies massively cheaper and more accessible — all you need is a camera, not a camera plus a huge tracking system. It's not going to replace everything immediately: film and high-end productions are going to stick with their stYpe systems for some time, for people in this room. But when you're talking about theatre, or live performance, or something like that, if the cost is brought right down because it's done in software, it becomes way more viable.

I think I'm just over time, so I'm going to end. I didn't talk about Notch — but we just entered private beta for Notch 1.0, so please sign up. It's been a very long gestation, and a very painful one, and we're now at the stage where you can sign up, get access and try it out. Please do that — please sign up and get access. Because I didn't have enough to do, I decided to build this presentation in Notch. I'm not sure if that was the best idea of my life, but I can show you this is 1.0 in action. Oops. So yeah, I cleverly built the slides in Notch — I don't know, it actually worked quite well. It's quite a big release: we've completely redone the UI, and all of the engine has changed as well. There are some really nice new features and an incredibly powerful new rendering engine — everything is new. So please grab the beta when you're able and check it out. I'm going to end there — any questions?

Questions? Anyone? Everyone wants to leave, don't they? If anyone has any questions for Matt, you have mics on either side and one wandering at the back. Go for it — no, on the microphone, the microphone, we want to hear you.

44:23

You don't have to answer this, but did you have one of those cool hacker names for some of that software?

Matt Swoboda  44:29

Yes.

44:32

Think

Matt Swoboda  44:33

No, I’m not gonna tell you what it was.

44:35

Offline offline.

Matt Swoboda  44:36

On the internet. Pablo runner, this guy right here

44:48

Does the private beta still say "demolition" on the error window?

Matt Swoboda  44:53

We've weeded those out. That took some time. We had to change the registry, man — we had to change everything to get that to work. It was not as easy as you might think, getting rid of that. Congrats. There's a new docking system as well, so you don't have to find the one pixel that you used to, to grab a panel. How does it feel to have arrived at 1.0? It doesn't feel like we're quite there yet. It's going to feel really good when it actually comes out. But yes, it's been a long process — turns out redoing your entire UI takes longer than you think it would. That's why no one wants to do it.

45:34

Yeah — with a new 1.0 rollout, where's the best place to get resources and training on what's new and where to find it?

Matt Swoboda  45:42

So in private beta there's not very much yet, but there will be — someone has to go and re-record all of the training content because the UI has changed. The manual is already up: we've kept the manual, we've made a new manual, so it's up to date. But there are going to be tons and tons of videos and resources. Also, it should be a bit easier to use now — we hope. We actually tested it on real people; some people down here probably got it before you.

46:08

I had a question for you. So with this new version, are there any new use cases or challenges that are the main ones you were trying to solve, thinking about this?

Matt Swoboda  46:17

So I guess, apart from the UI, there are a few really big fundamental changes. (I should have just talked about Notch, shouldn't I? Sorry about that.) The rendering engines were completely redone, and that was really done because we wanted to achieve a unification of renderers. You have an offline-quality, very fast path tracer that runs in real time by refining, and then you have the real-time renderers as well, but they're all designed to be interchangeable — you can switch them by switching a node — and they're all designed to have the same look. Because we did the path tracer first and made the other engines try and match it, rather than the bad mix it used to be. So you can switch: if you want quality, you just switch to that renderer; if you want it faster, you switch to a different renderer. If you're in a hurry, or if you want to use the same thing offline or rendered out — one of the most useful things is that you can see how it should look if it's perfect, and therefore what adjustments you need to make to make the real-time one look good. So the renderers were a huge undertaking. Also, the Fields system has been redone — the cloud thing that was running in the background is our new Fields volumetric simulation system.

One of the biggest changes overall, I think, is the introduction of assets. Assets are a concept where — I actually used this myself: I built the slides using our assets system. There's one node which is a template, where I built the layout of the slides, and then you expose — proxy-expose — the text and the inputs for the images. So all I have to do for each slide is hook up videos and change the text; it's just a reusable piece, a reusable tool. And if I change the template, all the instances update. If I saved that template out and gave it to you, you could use the same template — you just drag it in and you just see one node that you change. It's a really big concept, so we have a library for sharing, there's a way for users to share content, and we provide content. So instead of having to know how to build everything yourself, you can build things just from these individual nodes, which actually contain a whole ton of more complicated nodes inside. It's a really big change. And it never crashed — I've never seen Notch crash. I shut my eyes a lot.

Scott Millar  48:55

Just a quick one, going back to the talk — about the path tracing in real time and live. What are the challenges at the moment? Is it hardware? Is it software? Is it user knowledge? What kind of timescales or roadblocks do you see?

Matt Swoboda  49:13

The biggest problem is the speed. And beyond that, the biggest problem is going to be splitting — how we do really good network distribution of renders, ideally at low latency. Because you want to be able to scale linearly by buying more kit, right? You want to say: I want it to go faster, or look better, so I'll buy another box — you don't want to have to redo all your content. We need things to scale linearly; that's the impact it has, and hopefully you can just plug another box into the rack and get more. But that needs to work in distribution terms. A lot of the time the rendering technology is there; it's the distribution that needs to work. And we have to figure out what happens with the denoising, because that doesn't split quite so well — and it's not that stable anyway, as you saw from the wobbly video that was on screen (that's the NVIDIA one, I think). But that's another area of research that's moving really quickly, so wait a year and it'll probably be better. — There's another one floating over there somewhere.

Josue Ibañez  50:18

Okay, I have two quick questions. One is, when is the next NotchCon?

Matt Swoboda  50:25

We don't need to do NotchCon anymore, because we have Framework. Do you know how hard it is to organize? Do you know how hard it is to organize?! That took three months of the entire company — nothing happened apart from people doing NotchCon, and it was ridiculous.

Josue Ibañez  50:41

Okay, so my second question is a bit more tricky, and it's about the price. Are there any changes you're considering on price — you know, like opening up the range a bit more for artists? Because it's more targeted at the larger end of the industry, which is great. I just want to know if there's anything you've thought about there.

Matt Swoboda  51:02

There are going to be some changes. It's a bit tricky how much I can say before it's announced, but I think one thing we can say is that there's going to be an indie licence — a very cheap licence that's actually usable to do stuff with, without watermarks, but cheap and designed for individual users and projects that aren't Taylor Swift. So yes, there will be some changes, there will be some improvements, we will make it more accessible. I think we have time for one more. We've got one over — wherever it is.

Soren West  51:35

So my question for you is actually looking back at the past, at the demo scene. Is there a tool, or philosophy, or some type of technique that you wish was still present in this day and age?

Matt Swoboda  51:51

That's tricky. I mean, you do love a good copper bar. But I don't know — I think you have to move with the times. I don't really look back and think, God, I wish I was still doing that. You should always move with the times; nostalgia is not necessary in production, you've always got to keep moving. So yeah, I'm quite glad some of that old stuff went away, to be honest. I guess the only thing you can miss is the process — the process of a coder sitting down and coding stuff to make effects. That's not how you do it now; everyone uses tools and frameworks and things, because it's also very painful to do all your animation by writing code and re-running it again from the start every time, for four minutes. But there was something nice about sitting there and hacking something together — you don't really do that anymore. In terms of visual things, though, no. Glad it's gone.

All right. Amazing. Do you want me to let you leave the stage now, or do I leave the stage? Yeah. Okay. Perfect. Thanks, everyone. — Maybe, Laura, I can speak while he's awkwardly... yeah.

Laura Frank  53:09

We can use Matt as our backdrop. Yeah, stay right there.

Matt Swoboda  53:13

You can stay the whole time.

Laura Frank  53:16

So — well, is my mic on? Or I might need a handheld if I've killed the battery again. Oh — can you hear me?

Matt Swoboda  53:35

Check.

Laura Frank  53:36

Check. You got it. You got it. All right. Thank you, guys. Yes, thank you — thank you, everyone. This has been an incredible two days. I'm just wowed by everyone's generosity and the excitement we're creating around here. So thank you for joining us. I'm excited for the future of Framework. And we will always have Matt's backdrop behind us.

Matt Swoboda  54:02

Perfect. It's clear, yeah. I think — I mean, we've all been speaking with each other in the hallways and at the drinks last night. Thank you, man. We've all been catching up about this; hopefully you've had valuable conversations. But I'm going to echo what Ben said last night, and what Laura has been speaking about: this group is only what we make of it. So I really encourage you, if this is enjoyable to you, to please reach out to us and see how you could help, or be involved, or make more things like this happen. Part of the vision of the campus you're on now, our home here at XR Studios, is to do stuff like this all the time. I would love there to be a monthly Framework meetup, or different little talks about all these little things that we've talked about. But as Matt just said about NotchCon, it takes time and effort and people, and everyone in this room is doing a hundred other things at once — we're in a very busy industry. But yeah — they have local chapters and they organize events. Is that something...?

Laura Frank  55:13

So Matt is asking about local chapters. We have plans for 2024 — and plans take time and cost money, right? So we are looking for ways to regionalize, or create chapters for Framework, so there can be local activity. Tito Sabatini is here — I don't know if you've met him, but he is going to be our experiment in that structure by creating Framework Brazil. What we can achieve right now as volunteers is this annual conference, and we would like to do a lot more — honestly, even if it's monthly, or every couple of months, we get an event. I know we have sponsor partners who are interested in supporting the community speaking the way we like to around projects — whether it's our Novak victory-laps thing, live pixel people and what that represents. We want to bring the community together with our sponsors to get more messaging out there about the people who do this work. We want to get out from behind the video walls, the dark rooms and the all-night shifts, and tell our stories. Because the essence of this community is that we have all these different silos we operate in, and when we learn each other's processes, we excel in our work: we get new ideas, we learn new approaches for how to succeed or how to change what we do for the better. Bringing this community together is about making us stronger — ultimately, making us better partners to our clients, so that they have more awareness of what we do. And then the other piece is, of course, bringing new talent into the community and giving them the experience and mentorship they need to thrive.

So that's regional chapters; that's expanding our sponsor base; that is, I hope, starting professional memberships next year for the community, both individual and studio memberships; as well as what we expect will be a European-based annual event in the fall of next year. I envision a future where we can maybe support two of these internationally — whether we split it up regionally, much like SIGGRAPH does with an Asia event. We would love to be in Dubai, or somewhere in the Asia region as well, whether it's Bangkok or Tokyo — we have community members all over the world. When I checked our stats last night, we had people watching from Africa, people watching from all parts of Asia; we are an international community, and I want us to be there talking about those experiences everywhere. So I have big goals. I don't have big pockets yet, but we'll see what we can do.

Matt Swoboda  58:16

It takes pockets and time and the gifts of people as well. Everyone here has amazing abilities, and that can be leveraged in the right way — you just have to find that balance of time. So we're working on it internally; we'd love your thoughts, come find us. I think you touched on something that's really interesting about this particular event, which is that everyone in this room does what we're doing on stage right now — making a live event. And for my team, I just want to extend a big thank you — and if we can get a round of applause for the other people who are behind the screen. Because all the people in the seats know how difficult that is, and they've been very patiently watching. Hopefully it's been valuable for all of you as well, and a chance to meet new people. Take what you can from this; some of it will be available online at some point, in some sort of form — we'll discuss what that looks like.

Laura Frank  59:10

I'll say quickly, too, that the stream has been recorded. Anyone who paid for a stream ticket or is here in attendance will get the links for that. And in about a month I'll have everything posted to our site, by session, so you can share it — please do.

Matt Swoboda  59:26

Yeah, and I think that's really helpful — a small thing that all of you can do is give this exposure to other worlds. It feels like weird social-media-influencer stuff, but a lot of the people who are here this time — I've noticed a couple of friends and community members I met in the last year who are now here — they go and tell other people, and it kind of grows and changes, and the network gets created in that way, which is really helpful. So it's not just about getting likes, it's about actually informing other people and connecting other communities as well. There's a lot of stuff going on this week for SIGGRAPH, and we have a Birds of a Feather event.

Laura Frank  59:58

Birds of a Feather, Monday, 2pm to 5pm, room 512. We're going to do basically a community open mic — I'll be talking about Framework, and you can bring a short deck and share information about projects, your point of view of being a live pixel producer.

Matt Swoboda  1:00:16

So yeah, if you're not familiar, Birds of a Feather events are events surrounding SIGGRAPH — SIGGRAPH is this huge conference, but there are all these little siloed things around it. So please spread the word on that; we have some social media postings about it available, and we'd love to see some support there. It's a great opportunity to bring someone down the hall from another presentation into our space. Along with those other things, like sharing the speakers and talks — we've actually had a lot of really fun resurfacing of content from last year. The London talks — if you weren't there, you should go and look them up and rewatch some of those; there's some really amazing information there. And that's the point of this group: it'll continue to grow, and that knowledge changes over time, which is really helpful.

Moving into housekeeping a little bit: thank you so much, everyone, for coming. It's been super valuable to see everyone in this space. I'm personally very humbled to have everyone here — this space took a lot of work to make, and it's for the community, so it's great to have you all here for that. We do not have a happy hour here tonight like we did last night — apologies; Matt found the beer somewhere, so that's good — but we will have an unofficial sort of thing. I know a lot of people have dinners and other stuff like that, but anyone who does want to continue the journey and share some drinks and hang out, or if you've travelled here: there's a bar down the street called Harlowe, with an E — H, A, R, L, O, W, E. It's walkable, about 15 minutes, and that will be a nice little central place if people want to grab a drink and hang out. Not everyone will be there for very long, but that's the spot. It is not Framework sponsored — it's just a place — and I'm sure everyone will reach out to each other. I would really encourage you, if you haven't met someone new, to introduce yourself while we're leaving or whatever, and make sure you get contact information for people using those QR codes. It's really easy to talk to someone and then not know how to get hold of them later, so use that to connect with each other, and to coordinate your beverage adventures later if you have them.

Laura Frank  1:02:12

Thanks everyone.

Matt Swoboda  1:02:14

Really appreciate it. Thank you

SUMMARY KEYWORDS

gpu, talk, engines, hardware, video games, good, game, ai, render, bit, creative, people, graphics, real, projects, big, work, tools, live, production

SPEAKERS

Matt Swoboda, Soren West, Josue Ibañez, Scott Millar, Laura Frank