
Custom Render Pipeline

Contents

Taking Control of Rendering

Create a render pipeline asset and an instance of it.
Render a camera's view.
Perform culling, filtering, and sorting.
Separate opaque, transparent, and invalid passes.
Work with more than one camera.

This is the first part of a tutorial series about creating a custom render pipeline. It covers the minimal skeleton of a render pipeline built from scratch, which later installments will continually extend.

This tutorial assumes a basic understanding of Unity; the Object Management and Procedural Grid tutorials provide helpful background.

This tutorial is made with Unity 2019.2.6f1.


Rendering with a custom render pipeline.

1. A New Render Pipeline

When rendering, Unity has to determine which shapes have to be drawn, and where, when, and with what settings. This can get very complex, depending on how many effects are involved. Lights, shadows, transparency, image effects, volumetric effects, and everything else has to be dealt with in the correct order to arrive at the final image. This is what a render pipeline does.

In the past Unity only supported a few built-in ways to render things. Unity 2018 introduced scriptable render pipelines—RPs for short—making it possible to do whatever we want, while still being able to rely on Unity for fundamental steps like culling. Unity 2018 also added two experimental RPs made with this new approach: the Lightweight RP and the High Definition RP. In Unity 2019 the Lightweight RP is no longer experimental and got rebranded to the Universal RP in Unity 2019.3.


The Universal RP is destined to replace the current legacy RP as the default. The idea is that it is a one-size-fits-most RP that will also be fairly easy to customize. Rather than customizing that RP this series will create an entire RP from scratch.


This tutorial lays the foundation with a minimal RP that draws unlit shapes using forward rendering. Once that's working, we can extend our pipeline in later tutorials, adding lighting, shadows, different rendering methods, and more advanced features.


1.1 Project Setup

Create a new 3D project in Unity 2019.2.6 or later. We'll create our own pipeline, so don't select one of the RP project templates. Once the project is open you can go to the package manager and remove all packages that you don't need. We'll only use the Unity UI package in this tutorial to experiment with drawing the UI, so you can keep that one.


We're going to exclusively work in linear color space, but Unity 2019.2 still uses gamma space as the default. Go to the player settings via Edit / Project Settings and then Player, then switch Color Space under the Other Settings section to Linear.



Color space set to linear.

Fill the default scene with a few objects, using a mix of standard, unlit opaque and transparent materials. The Unlit/Transparent shader only works with a texture, so here is a UV sphere map for that.



UV sphere alpha map, on black background.

I put a few cubes in my test scene, all of which are opaque. The red ones use a material with the Standard shader while the green and yellow ones use a material with the Unlit/Color shader. The blue spheres use the Standard shader with Rendering Mode set to Transparent, while the white spheres use the Unlit/Transparent shader.



Test scene.

1.2 Pipeline Asset

Currently, Unity uses the default render pipeline. To replace it with a custom render pipeline we first have to create an asset type for it. We'll use roughly the same folder structure that Unity uses for the Universal RP. Create a Custom RP asset folder with a Runtime child folder. Put a new C# script in there for the CustomRenderPipelineAsset type.



Folder structure.

The asset type must extend RenderPipelineAsset from the UnityEngine.Rendering namespace.


using UnityEngine;
using UnityEngine.Rendering;

public class CustomRenderPipelineAsset : RenderPipelineAsset {}

The main purpose of the RP asset is to give Unity a way to get a hold of a pipeline object instance that is responsible for rendering. The asset itself is just a handle and a place to store settings. We don't have any settings yet, so all we have to do is give Unity a way to get our pipeline object instance. That's done by overriding the abstract CreatePipeline method, which should return a RenderPipeline instance. But we haven't defined a custom RP type yet, so begin by returning null.


The CreatePipeline method is defined with the protected access modifier, which means that only the class that defined the method—which is RenderPipelineAsset—and those that extend it can access it.


    protected override RenderPipeline CreatePipeline () {
        return null;
    }

Now we need to add an asset of this type to our project. To make that possible, add a CreateAssetMenu attribute to CustomRenderPipelineAsset.


That puts an entry in the Asset / Create menu. Let's be tidy and put it in a Rendering submenu. We do that by setting the menuName property of the attribute to Rendering/Custom Render Pipeline. This property can be set directly after the attribute type, within round brackets.


[CreateAssetMenu(menuName = "Rendering/Custom Render Pipeline")]
public class CustomRenderPipelineAsset : RenderPipelineAsset { … }

Use the new menu item to add the asset to the project, then go to the Graphics project settings and select it under Scriptable Render Pipeline Settings.


Custom RP selected.

Replacing the default RP changed a few things. First, a lot of options have disappeared from the graphics settings, which is mentioned in an info panel. Second, we've disabled the default RP without providing a valid replacement, so nothing gets rendered anymore. The game window, scene window, and material previews are no longer functional. If you open the frame debugger—via Window / Analysis / Frame Debugger—and enable it, you will see that indeed nothing gets drawn in the game window.


1.3 Render Pipeline Instance

Create a CustomRenderPipeline class and put its script file in the same folder as CustomRenderPipelineAsset. This will be the type used for the RP instance that our asset returns, thus it must extend RenderPipeline.


using UnityEngine;
using UnityEngine.Rendering;

public class CustomRenderPipeline : RenderPipeline {}

RenderPipeline defines a protected abstract Render method that we have to override to create a concrete pipeline. It has two parameters: a ScriptableRenderContext and a Camera array. Leave the method empty for now.


    protected override void Render (
        ScriptableRenderContext context, Camera[] cameras
    ) {}

Make CustomRenderPipelineAsset.CreatePipeline return a new instance of CustomRenderPipeline. That will get us a valid and functional pipeline, although it doesn't render anything yet.


    protected override RenderPipeline CreatePipeline () {
        return new CustomRenderPipeline();
    }

2. Rendering

Each frame Unity invokes Render on the RP instance. It passes along a context struct that provides a connection to the native engine, which we can use for rendering. It also passes an array of cameras, as there can be multiple active cameras in the scene. It is the RP's responsibility to render all those cameras in the order that they are provided.


2.1 Camera Renderer

Each camera gets rendered independently. So rather than have CustomRenderPipeline render all cameras, we'll forward that responsibility to a new class dedicated to rendering one camera. Name it CameraRenderer and give it a public Render method with a context and a camera parameter. Let's store these parameters in fields for convenience.


using UnityEngine;
using UnityEngine.Rendering;

public class CameraRenderer {

    ScriptableRenderContext context;

    Camera camera;

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;
    }
}

Have CustomRenderPipeline create an instance of the renderer when it gets created, then use it to render all cameras in a loop.


    CameraRenderer renderer = new CameraRenderer();

    protected override void Render (
        ScriptableRenderContext context, Camera[] cameras
    ) {
        foreach (Camera camera in cameras) {
            renderer.Render(context, camera);
        }
    }

Our camera renderer is roughly equivalent to the scriptable renderers of the Universal RP. This approach will make it simple to support different rendering approaches per camera in the future, for example one for the first-person view and one for a 3D map overlay, or forward vs. deferred rendering. But for now we'll render all cameras the same way.


2.2 Drawing the Skybox

The job of CameraRenderer.Render is to draw all geometry that its camera can see. Isolate that specific task in a separate DrawVisibleGeometry method for clarity. We'll begin by having it draw the default skybox, which can be done by invoking DrawSkybox on the context with the camera as an argument.


    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        DrawVisibleGeometry();
    }

    void DrawVisibleGeometry () {
        context.DrawSkybox(camera);
    }

This does not yet make the skybox appear. That's because the commands that we issue to the context are buffered. We have to submit the queued work for execution, by invoking Submit on the context. Let's do this in a separate Submit method, invoked after DrawVisibleGeometry.


    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        DrawVisibleGeometry();
        Submit();
    }

    void Submit () {
        context.Submit();
    }

The skybox finally appears in both the game and scene window. You can also see an entry for it in the frame debugger when you enable it. It's listed as Camera.RenderSkybox, which has a single Draw Mesh item under it, which represents the actual draw call. This corresponds to the rendering of the game window. The frame debugger doesn't report drawing in other windows.



Skybox gets drawn.

Note that the orientation of the camera currently doesn't affect how the skybox gets rendered. We pass the camera to DrawSkybox, but that's only used to determine whether the skybox should be drawn at all, which is controlled via the camera's clear flags.


To correctly render the skybox—and the entire scene—we have to set up the view-projection matrix. This transformation matrix combines the camera's position and orientation—the view matrix—with the camera's perspective or orthographic projection—the projection matrix. It is known in shaders as unity_MatrixVP, one of the shader properties used when geometry gets drawn. You can inspect this matrix in the frame debugger's ShaderProperties section when a draw call is selected.

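As an aside, not part of the pipeline code: the view-projection matrix is the product of the projection and view matrices. Assuming you have a Camera reference, a rough CPU-side equivalent could be computed like this:

```csharp
// Sketch: combining the camera's view matrix with its projection matrix.
// GL.GetGPUProjectionMatrix adjusts the projection for platform differences,
// matching what ends up on the GPU as part of unity_MatrixVP.
Matrix4x4 view = camera.worldToCameraMatrix;
Matrix4x4 projection = GL.GetGPUProjectionMatrix(camera.projectionMatrix, false);
Matrix4x4 viewProjection = projection * view;
```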

At the moment, the unity_MatrixVP matrix is always the same. We have to apply the camera's properties to the context, via the SetupCameraProperties method. That sets up the matrix as well as some other properties. Do this before invoking DrawVisibleGeometry, in a separate Setup method.


    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        Setup();
        DrawVisibleGeometry();
        Submit();
    }

    void Setup () {
        context.SetupCameraProperties(camera);
    }


Skybox, correctly aligned.

2.3 Command Buffers

The context delays the actual rendering until we submit it. Before that, we configure it and add commands to it for later execution. Some tasks—like drawing the skybox—can be issued via a dedicated method, but other commands have to be issued indirectly, via a separate command buffer. We need such a buffer to draw the other geometry in the scene.

To get a buffer we have to create a new CommandBuffer object instance. We need only one buffer, so create one by default for CameraRenderer and store a reference to it in a field. Also give the buffer a name so we can recognize it in the frame debugger. Render Camera will do.


    const string bufferName = "Render Camera";

    CommandBuffer buffer = new CommandBuffer {
        name = bufferName
    };
How does that object initializer syntax work?
It's as if we've written buffer.name = bufferName; as a separate statement after invoking the constructor. But when creating a new object, you can append a code block to the constructor's invocation. Then you can set the object's fields and properties in the block without having to reference the object instance explicitly. It makes explicit that the instances should only be used after those fields and properties have been set. Besides that, it makes initialization possible where only a single statement is allowed—for example a field initialization, which we're using here—without requiring constructors with many parameter variants.

Note that we omitted the empty parameter list of the constructor invocation, which is allowed when object initializer syntax is used.
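To make the equivalence explicit, here is a sketch of both forms; the second variant is only possible where multiple statements are allowed:

```csharp
// Object initializer syntax: construct and configure in a single statement,
// which is what allows us to use it in a field initializer.
CommandBuffer buffer = new CommandBuffer {
    name = bufferName
};

// Equivalent two-statement version, usable inside a method or constructor:
// CommandBuffer other = new CommandBuffer();
// other.name = bufferName;
```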

We can use command buffers to inject profiler samples, which will show up both in the profiler and the frame debugger. This is done by invoking BeginSample and EndSample at the appropriate points, which is at the beginning of Setup and Submit in our case. Both methods must be provided with the same sample name, for which we'll use the buffer's name.


    void Setup () {
        buffer.BeginSample(bufferName);
        context.SetupCameraProperties(camera);
    }

    void Submit () {
        buffer.EndSample(bufferName);
        context.Submit();
    }
To execute the buffer, invoke ExecuteCommandBuffer on the context with the buffer as an argument. That copies the commands from the buffer but doesn't clear it; we have to do that explicitly afterwards if we want to reuse it. Because execution and clearing are always done together, it's handy to add a method that does both.


    void Setup () {
        buffer.BeginSample(bufferName);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }

    void Submit () {
        buffer.EndSample(bufferName);
        ExecuteBuffer();
        context.Submit();
    }

    void ExecuteBuffer () {
        context.ExecuteCommandBuffer(buffer);
        buffer.Clear();
    }

The Camera.RenderSkybox sample now gets nested inside Render Camera.

Render camera sample.

2.4 Clearing the Render Target

Whatever we draw ends up getting rendered to the camera's render target, which is the frame buffer by default but could also be a render texture. Whatever was drawn to that target earlier is still there, which could interfere with the image that we are rendering now. To guarantee proper rendering we have to clear the render target to get rid of its old contents. That's done by invoking ClearRenderTarget on the command buffer, which belongs in the Setup method.

CommandBuffer.ClearRenderTarget requires at least three arguments. The first two indicate whether the depth and color data should be cleared, which is true for both. The third argument is the color used for clearing, for which we'll use Color.clear.


    void Setup () {
        buffer.BeginSample(bufferName);
        buffer.ClearRenderTarget(true, true, Color.clear);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }


Clearing, with nested sample.

The frame debugger now shows a Draw GL entry for the clear action, which shows up nested in an additional level of Render Camera. That happens because ClearRenderTarget wraps the clearing in a sample with the command buffer's name. We can get rid of the redundant nesting by clearing before beginning our own sample. That results in two adjacent Render Camera sample scopes, which get merged.


    void Setup () {
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(bufferName);
        //buffer.ClearRenderTarget(true, true, Color.clear);
        ExecuteBuffer();
        context.SetupCameraProperties(camera);
    }


Clearing, without nesting.

The Draw GL entry represent drawing a full-screen quad with the Hidden/InternalClear shader that writes to the render target, which isn't the most efficient way to clear it. This approach is used because we're clearing before setting up the camera properties. If we swap the order of those two steps we get the quick way to clear.



Clearing quickly.

Now we see Clear (color+Z+stencil), which indicates that both the color and depth buffers get cleared. Z represents the depth buffer and the stencil data is part of the same buffer.

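The swapped order described above would make Setup look like this, a sketch assuming the same fields as before:

```csharp
    void Setup () {
        // Applying the camera properties first lets the clear happen
        // the quick way, instead of via a full-screen quad.
        context.SetupCameraProperties(camera);
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(bufferName);
        ExecuteBuffer();
    }
```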

2.5 Culling

We're currently seeing the skybox, but not any of the objects that we put in the scene. Rather than drawing every object, we're only going to render those that are visible to the camera. We do that by starting with all objects with renderer components in the scene and then culling those that fall outside of the view frustum of the camera.

Figuring out what can be culled requires us to keep track of multiple camera settings and matrices, for which we can use the ScriptableCullingParameters struct. Instead of filling it ourselves, we can invoke TryGetCullingParameters on the camera. It returns whether the parameters could be successfully retrieved, as it might fail for degenerate camera settings. To get hold of the parameter data we have to supply it as an output argument, by writing out in front of it. Do this in a separate Cull method that returns either success or failure.


    bool Cull () {
        ScriptableCullingParameters p;
        if (camera.TryGetCullingParameters(out p)) {
            return true;
        }
        return false;
    }
Why do we have to write out?
When a struct parameter is defined as an output parameter it acts like an object reference, pointing to the place on the memory stack where the argument resides. When the method changes the parameter it affects that value, not a copy.

The out keyword tells us that the method is responsible for correctly setting the parameter, replacing the previous value.

Try-get methods are a common way to both indicate success or failure and produce a result.


When a method has to produce multiple results, an array can be returned if they all share the same type; otherwise out parameters are a good fit.
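The try-get pattern can be illustrated with plain C#, independent of Unity; the names here are hypothetical:

```csharp
// Sketch of the try-get pattern: success is reported via the return value
// and the result is delivered through an out parameter.
static bool TryGetHalf (int value, out int half) {
    if (value % 2 == 0) {
        half = value / 2;
        return true;
    }
    half = 0; // an out parameter must be assigned on every code path
    return false;
}

// Usage: if (TryGetHalf(42, out int h)) { /* h is 21 here */ }
```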

It is possible to inline the variable declaration inside the argument list when used as an output argument, so let's do that.


    bool Cull () {
        //ScriptableCullingParameters p
        if (camera.TryGetCullingParameters(out ScriptableCullingParameters p)) {
            return true;
        }
        return false;
    }

Invoke Cull in Render before Setup, and abort if it fails.

    public void Render (ScriptableRenderContext context, Camera camera) {
        this.context = context;
        this.camera = camera;

        if (!Cull()) {
            return;
        }

        Setup();
        DrawVisibleGeometry();
        Submit();
    }
Actual culling is done by invoking Cull on the context, which produces a CullingResults struct. Do this in Cull if successful and store the results in a field. In this case we have to pass the culling parameters as a reference argument, by writing ref in front of it.


    CullingResults cullingResults;

    …

    bool Cull () {
        if (camera.TryGetCullingParameters(out ScriptableCullingParameters p)) {
            cullingResults = context.Cull(ref p);
            return true;
        }
        return false;
    }
About ref

https://www.cnblogs.com/yyhxqx/p/4792292.html
C# has two kinds of data types: reference types and value types. Simple types (including int, long, and double) and structs are value types, while other classes are reference types. Simple types get copied when passed by value, whereas reference types only pass a reference, much like a pointer in C++.
Note the difference between structs in C# and C++. In C++, structs and classes are essentially the same (except that the default inheritance and default access are public rather than private). In C#, structs differ from classes considerably. The biggest difference, and an easy one to overlook, is that a struct is a value type, not a reference type.

The ref keyword works just like out, except that the method is not required to assign something to it. Whoever invokes the method is responsible for properly initializing the value first. So it can be used for input and optionally for output.

In this case ref is used as an optimization, to prevent passing a copy of the ScriptableCullingParameters struct, which is quite large. It being a struct instead of an object is another optimization, to prevent memory allocations.
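A minimal sketch of the difference between passing a struct by value and by reference, using a hypothetical struct:

```csharp
struct Data { public int value; }

// By value: the method receives a copy; the caller's struct is unchanged.
static void SetByValue (Data d) { d.value = 1; }

// By reference: no copy is made, so changes affect the caller's struct.
// Unlike out, the caller must initialize the value before the call.
static void SetByRef (ref Data d) { d.value = 1; }
```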

2.6 Drawing Geometry

Once we know what is visible we can move on to rendering those things. That is done by invoking DrawRenderers on the context with the culling results as an argument, telling it which renderers to use. Besides that, we have to supply drawing settings and filtering settings. Both are structs—DrawingSettings and FilteringSettings—for which we'll initially use their default constructors. Both have to be passed by reference. Do this in DrawVisibleGeometry, before drawing the skybox.


    void DrawVisibleGeometry () {
        var drawingSettings = new DrawingSettings();
        var filteringSettings = new FilteringSettings();

        context.DrawRenderers(
            cullingResults, ref drawingSettings, ref filteringSettings
        );

        context.DrawSkybox(camera);
    }
We don't see anything yet because we also have to indicate which kind of shader passes are allowed. As we only support unlit shaders in this tutorial we have to fetch the shader tag ID for the SRPDefaultUnlit pass, which we can do once and cache it in a static field.


    static ShaderTagId unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit");
Provide it as the first argument of the DrawingSettings constructor, along with a new SortingSettings struct value. Pass the camera to the constructor of the sorting settings, as it's used to determine whether orthographic or distance-based sorting applies.


    void DrawVisibleGeometry () {
        var sortingSettings = new SortingSettings(camera);
        var drawingSettings = new DrawingSettings(
            unlitShaderTagId, sortingSettings
        );
        …
    }
Besides that we also have to indicate which render queues are allowed. Pass RenderQueueRange.all to the FilteringSettings constructor so we include everything.


        var filteringSettings = new FilteringSettings(RenderQueueRange.all);



Drawing unlit geometry.

Only the visible objects that use the unlit shader get drawn. All the draw calls are listed in the frame debugger, grouped under RenderLoop.Draw. There's something weird going on with transparent objects, but let's first look at the order in which the objects are drawn. That's shown by the frame debugger and you can step through the draw calls by selecting one after the other or using the arrow keys.


https://gfycat.com/bothanyindianskimmer
Stepping through the frame debugger.

The drawing order is haphazard. We can force a specific draw order by setting the criteria property of the sorting settings. Let's use SortingCriteria.CommonOpaque.


        var sortingSettings = new SortingSettings(camera) {
            criteria = SortingCriteria.CommonOpaque
        };



Common opaque sorting.

Objects now get more-or-less drawn front-to-back, which is ideal for opaque objects. If something ends up drawn behind something else its hidden fragments can be skipped, which speeds up rendering. The common opaque sorting option also takes some other criteria into consideration, including the render queue and materials.


2.7 Drawing Opaque and Transparent Geometry Separately

The frame debugger shows us that transparent objects get drawn, but the skybox gets drawn over everything that doesn't end up in front of an opaque object. The skybox gets drawn after the opaque geometry so all its hidden fragments can get skipped, but it overwrites transparent geometry. That happens because transparent shaders do not write to the depth buffer. They don't hide whatever's behind them, because we can see through them. The solution is to first drawn opaque objects, then the skybox, and only then transparent objects.


We can eliminate the transparent objects from the initial DrawRenderers invocation by switching to RenderQueueRange.opaque.


Then after drawing the skybox invoke DrawRenderers again. But before doing so change the render queue range to RenderQueueRange.transparent. Also change the sorting criteria to SortingCriteria.CommonTransparent and again set the sorting of the drawing settings. That reverses the draw order of the transparent objects.

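Putting the steps above together, DrawVisibleGeometry would end up looking roughly like this, reusing the fields and settings introduced earlier:

```csharp
    void DrawVisibleGeometry () {
        // First the opaque geometry, sorted front-to-back.
        var sortingSettings = new SortingSettings(camera) {
            criteria = SortingCriteria.CommonOpaque
        };
        var drawingSettings = new DrawingSettings(unlitShaderTagId, sortingSettings);
        var filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
        context.DrawRenderers(cullingResults, ref drawingSettings, ref filteringSettings);

        // Then the skybox, so its hidden fragments can be skipped.
        context.DrawSkybox(camera);

        // Finally the transparent geometry, sorted back-to-front.
        sortingSettings.criteria = SortingCriteria.CommonTransparent;
        drawingSettings.sortingSettings = sortingSettings;
        filteringSettings.renderQueueRange = RenderQueueRange.transparent;
        context.DrawRenderers(cullingResults, ref drawingSettings, ref filteringSettings);
    }
```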

Why is the draw order reversed?

As transparent objects do not write to the depth buffer sorting them front-to-back has no performance benefit. But when transparent objects end up visually behind each other they have to be drawn back-to-front to correctly blend.

Unfortunately back-to-front sorting does not guarantee correct blending, because sorting is per-object and only based on the object's position. Intersecting and large transparent objects can still produce incorrect results. This can sometimes be solved by cutting the geometry in smaller parts.


3. Editor Rendering

Our RP correctly draws unlit objects, but there are a few things that we can do to improve the experience of working with it in the Unity editor.


3.1 Drawing Legacy Shaders

Because our pipeline only supports unlit shaders passes, objects that use different passes are not rendered, making them invisible. While this is correct, it hides the fact that some objects in the scene use the wrong shader. So let's render them anyway, but separately.

If someone were to start with a default Unity project and later switch to our RP then they might have objects with the wrong shader in their scenes. To cover all Unity's default shaders we have to use shaders tag IDs for the Always, ForwardBase, PrepassBase, Vertex, VertexLMRGBM, and VertexLM passes. Keep track of these in a static array.


    static ShaderTagId[] legacyShaderTagIds = {
        new ShaderTagId("Always"),
        new ShaderTagId("ForwardBase"),
        new ShaderTagId("PrepassBase"),
        new ShaderTagId("Vertex"),
        new ShaderTagId("VertexLMRGBM"),
        new ShaderTagId("VertexLM")
    };
Draw all unsupported shaders in a separate method after the visible geometry, starting with just the first pass. As these are invalid passes the results will be wrong anyway so we don't care about the other settings. We can get default filtering settings via the FilteringSettings.defaultValue property.


    public void Render (ScriptableRenderContext context, Camera camera) {
        …

        Setup();
        DrawVisibleGeometry();
        DrawUnsupportedShaders();
        Submit();
    }

    …

    void DrawUnsupportedShaders () {
        var drawingSettings = new DrawingSettings(
            legacyShaderTagIds[0], new SortingSettings(camera)
        );
        var filteringSettings = FilteringSettings.defaultValue;
        context.DrawRenderers(
            cullingResults, ref drawingSettings, ref filteringSettings
        );
    }
We can draw multiple passes by invoking SetShaderPassName on the drawing settings with a draw order index and tag as arguments. Do this for all passes in the array, starting at the second as we already set the first pass when constructing the drawing settings.


        var drawingSettings = new DrawingSettings(
            legacyShaderTagIds[0], new SortingSettings(camera)
        );
        for (int i = 1; i < legacyShaderTagIds.Length; i++) {
            drawingSettings.SetShaderPassName(i, legacyShaderTagIds[i]);
        }
Translator's note: at this step I got an error and the unsupported passes failed to draw correctly:

![3.1debug](https://zhangshaojie.co/wp-content/uploads/2021/03/3.1debug.png)

I had placed the SetShaderPassName loop after context.DrawRenderers, so the extra passes were never registered before drawing. Moving the loop above the draw call fixed it.
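For reference, DrawUnsupportedShaders with the pass loop placed before the draw call — a sketch combining the snippets above:

```csharp
    void DrawUnsupportedShaders () {
        var drawingSettings = new DrawingSettings(
            legacyShaderTagIds[0], new SortingSettings(camera)
        );
        // Register the remaining legacy passes before drawing, not after.
        for (int i = 1; i < legacyShaderTagIds.Length; i++) {
            drawingSettings.SetShaderPassName(i, legacyShaderTagIds[i]);
        }
        var filteringSettings = FilteringSettings.defaultValue;
        context.DrawRenderers(
            cullingResults, ref drawingSettings, ref filteringSettings
        );
    }
```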


Standard shader renders black.

Objects rendered with the standard shader show up, but they're now solid black because our RP hasn't set up the required shader properties for them.


3.2 Error Material

To clearly indicate which objects use unsupported shaders we'll draw them with Unity's error shader. Construct a new material with that shader as an argument, which we can find by invoking Shader.Find with the Hidden/InternalErrorShader string as an argument. Cache the material via a static field so we won't create a new one each frame. Then assign it to the overrideMaterial property of the drawing settings.


    static Material errorMaterial;

    …

    void DrawUnsupportedShaders () {
        if (errorMaterial == null) {
            errorMaterial =
                new Material(Shader.Find("Hidden/InternalErrorShader"));
        }
        var drawingSettings = new DrawingSettings(
            legacyShaderTagIds[0], new SortingSettings(camera)
        ) {
            overrideMaterial = errorMaterial
        };
        …
    }


Rendered with magenta error shader.

3.3 Partial Class

Drawing invalid objects is useful for development but is not meant for released apps. So let's put all editor-only code for CameraRenderer in a separate partial class file. Begin by duplicating the original CameraRenderer script asset and renaming it to CameraRenderer.Editor.



One class, two script assets.

Then turn the original CameraRenderer into a partial class and remove the tag array, error material, and DrawUnsupportedShaders method from it.
public partial class CameraRenderer { … }


What are partial classes?

It's a way to split a class—or struct—definition into multiple parts, stored in different files. The only purpose is to organize code. The typical use case is to keep automatically-generated code separate from manually-written code. As far as the compiler is concerned, it's all part of the same class definition. They were introduced in the Object Management, More Complex Levels tutorial.

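A minimal illustration, using hypothetical names unrelated to the render pipeline:

```csharp
// FileA.cs
public partial class Example {
    public int Double (int x) => x * 2;
}

// FileB.cs
public partial class Example {
    public int Triple (int x) => x * 3;
}

// The compiler merges both parts into a single Example class,
// so one instance exposes both Double and Triple.
```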

Clean the other partial class file so it only contains what we removed from the other.


using UnityEngine;
using UnityEngine.Rendering;

partial class CameraRenderer {

    static ShaderTagId[] legacyShaderTagIds = { … };

    static Material errorMaterial;

    void DrawUnsupportedShaders () { … }
}
The content of the editor part only needs to exist in the editor, so make it conditional on UNITY_EDITOR.


partial class CameraRenderer {

#if UNITY_EDITOR

    static ShaderTagId[] legacyShaderTagIds = { … };

    static Material errorMaterial;

    void DrawUnsupportedShaders () { … }

#endif
}
However, making a build will fail at this point, because the other part always contains the invocation of DrawUnsupportedShaders, which now only exists while in the editor. To solve this we make that method partial as well. We do that by always declaring the method signature with partial in front of it, similar to an abstract method declaration. We can do that in any part of the class definition, so let's put it in the editor part. The full method declaration must be marked with partial as well.


Translator's note

The initial partial declaration of DrawUnsupportedShaders exists in both the editor and runtime environments, while the full method body lives only in the editor part.

    partial void DrawUnsupportedShaders ();

#if UNITY_EDITOR

    …

    partial void DrawUnsupportedShaders () { … }

#endif
Compilation for a build now succeeds. The compiler will strip out the invocation of all partial methods that didn't end up with a full declaration.


Can we make the invalid objects appear in development builds?

Yes, you can base the conditional compilation on UNITY_EDITOR || DEVELOPMENT_BUILD instead. Then DrawUnsupportedShaders exists in development builds as well and still not in release builds. But I'll consistently limit everything development-related to the editor only in this series.


3.4 Drawing Gizmos

Currently our RP doesn't draw gizmos, neither in the scene window nor in the game window if they are enabled there.


Scene without gizmos.

We can check whether gizmos should be drawn by invoking UnityEditor.Handles.ShouldRenderGizmos. If so, we have to invoke DrawGizmos on the context with the camera as an argument, plus a second argument to indicate which gizmo subset should be drawn. There are two subsets, for before and after image effects. As we don't support image effects at this point we'll invoke both. Do this in a new editor-only DrawGizmos method.


using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

partial class CameraRenderer {

    partial void DrawGizmos ();

    partial void DrawUnsupportedShaders ();

#if UNITY_EDITOR

    …

    partial void DrawGizmos () {
        if (Handles.ShouldRenderGizmos()) {
            context.DrawGizmos(camera, GizmoSubset.PreImageEffects);
            context.DrawGizmos(camera, GizmoSubset.PostImageEffects);
        }
    }

    partial void DrawUnsupportedShaders () { … }

#endif
}
The gizmos should be drawn after everything else.

Add the DrawGizmos invocation at the end of the Render method.
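A sketch of the updated Render method, matching the structure used earlier; the ellipsis stands for the culling and setup code already in place:

```csharp
    public void Render (ScriptableRenderContext context, Camera camera) {
        …

        Setup();
        DrawVisibleGeometry();
        DrawUnsupportedShaders();
        // Gizmos are drawn after everything else.
        DrawGizmos();
        Submit();
    }
```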


Scene with gizmos.

3.5 Drawing Unity UI

Another thing that requires our attention is Unity's in-game user interface. For example, create a simple UI by adding a button via GameObject / UI / Button. It will show up in the game window, but not the scene window.


UI button in game window.

The frame debugger shows us that the UI is rendered separately, not by our RP.


Screen-space-camera UI in frame debugger.

At least, that's the case when the Render Mode of the canvas component is set to Screen Space - Overlay, which is the default. Changing it to Screen Space - Camera and using the main camera as its Render Camera will make it part of the transparent geometry.



The UI always uses the World Space mode when it gets rendered in the scene window, which is why it usually ends up very large. But while we can edit the UI via the scene window it doesn't get drawn.


UI invisible in scene window.

We have to explicitly add the UI to the world geometry when rendering for the scene window, by invoking ScriptableRenderContext.EmitWorldGeometryForSceneView with the camera as an argument. Do this in a new editor-only PrepareForSceneWindow method. We're rendering with the scene camera when its cameraType property is equal to CameraType.SceneView.


    partial void PrepareForSceneWindow ();

#if UNITY_EDITOR

    …

    partial void PrepareForSceneWindow () {
        if (camera.cameraType == CameraType.SceneView) {
            ScriptableRenderContext.EmitWorldGeometryForSceneView(camera);
        }
    }

Then invoke this method before culling.

        PrepareForSceneWindow();
        if (!Cull()) {
            return;
        }

UI visible in scene window.

4. Multiple Cameras

4.1 Two Cameras

Each camera has a Depth value, which is −1 for the default main camera. They get rendered in increasing order of depth. To see this, duplicate the Main Camera, rename it to Secondary Camera, and set its Depth to 0. It's also a good idea to give it another tag, as MainCamera is supposed to be used by only a single camera.


Both cameras grouped in a single sample scope.

The scene now gets rendered twice. The resulting image is still the same because the render target gets cleared in between. The frame debugger shows this, but because adjacent sample scopes with the same name get merged we end up with a single Render Camera scope.

It's clearer if each camera gets its own scope. To make that possible, add an editor-only PrepareBuffer method that makes the buffer's name equal to the camera's.


    partial void PrepareBuffer ();

#if UNITY_EDITOR

    …

    partial void PrepareBuffer () {
        buffer.name = camera.name;
    }

#endif

Separate samples per camera.

4.2 Dealing with Changing Buffer Names

Although the frame debugger now shows a separate sample hierarchy per camera, when we enter play mode Unity's console will get filled with messages warning us that BeginSample and EndSample counts must match. It gets confused because we're using different names for the samples and their buffer. Besides that, we also end up allocating memory each time we access the camera's name property, so we don't want to do that in builds.

To tackle both issues we'll add a SampleName string property. If we're in the editor we set it in PrepareBuffer along with the buffer's name, otherwise it's simply a constant alias for the Render Camera string.


#if UNITY_EDITOR

    …

    string SampleName { get; set; }

    …

    partial void PrepareBuffer () {
        buffer.name = SampleName = camera.name;
    }

#else

    const string SampleName = bufferName;

#endif
Use SampleName for the sample in Setup and Submit.
    void Setup () {
        context.SetupCameraProperties(camera);
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(SampleName);
        ExecuteBuffer();
    }

    void Submit () {
        buffer.EndSample(SampleName);
        ExecuteBuffer();
        context.Submit();
    }
We can see the difference by checking the profiler—opened via Window / Analysis / Profiler—and playing in the editor first. Switch to Hierarchy mode and sort by the GC Alloc column. You'll see an entry for two invocations of GC.Alloc, allocating 100 bytes in total, which is caused by the retrieval of the camera names. Further down you'll see those names show up as samples: Main Camera and Secondary Camera.



Profiler with separate samples and 100B allocations.

Next, make a build with Development Build and Autoconnect Profiler enabled. Play the build and make sure that the profiler is connected and recording. In this case we don't get the 100 bytes of allocation and we get the single Render Camera sample instead.



Profiling build.

We can make it clear that we're allocating memory only in the editor and not in builds by wrapping the camera name retrieval in a profiler sample named Editor Only. In this case we need to invoke Profiler.BeginSample and Profiler.EndSample from the UnityEngine.Profiling namespace. Only BeginSample needs to be passed the name.


using UnityEditor;
using UnityEngine;
using UnityEngine.Profiling;
using UnityEngine.Rendering;

partial class CameraRenderer {

    …

#if UNITY_EDITOR

    …

    partial void PrepareBuffer () {
        Profiler.BeginSample("Editor Only");
        buffer.name = SampleName = camera.name;
        Profiler.EndSample();
    }

#else

    const string SampleName = bufferName;

#endif
}

4.3 Layers

Cameras can also be configured to only see things on certain layers. This is done by adjusting their Culling Mask. To see this in action let's move all objects that use the standard shader to the Ignore Raycast layer.

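Culling masks can also be set from code; the mask is a bit field with one bit per layer. A hypothetical sketch, not part of the tutorial's pipeline code:

```csharp
// Render everything except the Ignore Raycast layer.
camera.cullingMask = ~(1 << LayerMask.NameToLayer("Ignore Raycast"));

// Or render only that layer, as the secondary camera does below.
// camera.cullingMask = 1 << LayerMask.NameToLayer("Ignore Raycast");
```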

Adjust the Main Camera's Culling Mask to exclude the Ignore Raycast layer.

Culling the Ignore Raycast layer.

Adjust the Secondary Camera's Culling Mask to include only the Ignore Raycast layer.

Culling everything but the Ignore Raycast layer.

4.4 Clear Flags

This applies when rendering with multiple cameras.

About clear flags

Put simply, clear flags determine which of the previous frame's render results get cleared before each frame is rendered. There are four options: skybox, color, depth, and nothing.

We can combine the results of both cameras by adjusting the clear flags of the second one that gets rendered. They're defined by a CameraClearFlags enum which we can retrieve via the camera's clearFlags property. Do this in Setup before clearing.


    void Setup () {
        context.SetupCameraProperties(camera);
        CameraClearFlags flags = camera.clearFlags;
        buffer.ClearRenderTarget(true, true, Color.clear);
        buffer.BeginSample(SampleName);
        ExecuteBuffer();
    }
The CameraClearFlags enum defines four values. From 1 to 4 they are Skybox, Color, Depth, and Nothing. These aren't actually independent flag values but represent a decreasing amount of clearing. The depth buffer has to be cleared in all cases except the last one, so when the flags value is at most Depth.


        buffer.ClearRenderTarget(
            flags <= CameraClearFlags.Depth, true, Color.clear
        );
We only really need to clear the color buffer when flags are set to Color, because in the case of Skybox we end up replacing all previous color data anyway.


        buffer.ClearRenderTarget(
            flags <= CameraClearFlags.Depth,
            flags == CameraClearFlags.Color,
            Color.clear
        );
And if we're clearing to a solid color we have to use the camera's background color. But because we're rendering in linear color space we have to convert that color to linear space, so we end up needing camera.backgroundColor.linear. In all other cases the color doesn't matter, so we can suffice with Color.clear.


        buffer.ClearRenderTarget(
            flags <= CameraClearFlags.Depth,
            flags == CameraClearFlags.Color,
            flags == CameraClearFlags.Color ?
                camera.backgroundColor.linear : Color.clear
        );
Because Main Camera is the first to render, its Clear Flags should be set to either Skybox or Color. When the frame debugger is enabled we always begin with a clear buffer, but this is not guaranteed in general.

The clear flags of Secondary Camera determines how the rendering of both cameras gets combined. In the case of skybox or color the previous results get completely replaced. When only depth is cleared Secondary Camera renders as normal except that it doesn't draw a skybox, so the previous results show up as the background. When nothing gets cleared the depth buffer is retained, so unlit objects end up occluding invalid objects as if they were drawn by the same camera. However, transparent objects drawn by the previous camera have no depth information, so are drawn over, just like the skybox did earlier.




From top to bottom: clear color, depth-only, and nothing.

By adjusting the camera's Viewport Rect it is also possible to reduce the rendered area to only a fraction of the entire render target. The rest of the render target remains unaffected. In this case clearing happens with the Hidden/InternalClear shader. The stencil buffer is used to limit rendering to the viewport area.


Reduced viewport of secondary camera, clearing color.

Note that rendering more than one camera per frame means culling, setup, sorting, etc. has to be done multiple times as well. Using one camera per unique point of view is typically the most efficient approach.

