For my next trick, I am going to show you how to draw a triangle. It turns out there is quite a lot of water that needs to flow underneath the bridge before we can achieve this seemingly simple feat. To start off, I moved everything related to DirectX and graphics into a separate file called Renderer under a new namespace called
Flow.Graphics. This file contains several new types/classes that will help us draw things to our render window. Our new main program is now much shorter. I'll start with the new main method and then delve deeper into each of the new classes in Renderer.
open System
open System.Windows.Forms
open SlimDX
open Flow.Graphics
//---------------------------------------------------------------------------------------
[<STAThread>]
[<EntryPoint>]
let main(args) =
    let width, height = 400, 300
    let form = new Form(Text = "Blue skies", Width = width, Height = height)
    let renderer = new Renderer(form.Handle, width, height)
    let triangleVertices = [|
        Vector3(-1.0f, 0.0f, 0.0f);
        Vector3(0.0f, 1.0f, 0.0f);
        Vector3(1.0f, 0.0f, 0.0f) |]
    let triangle = renderer.CreateDrawable(triangleVertices)
    let paint _ =
        let eye, lookAt, up =
            Vector3(0.0f, 0.0f, -5.0f),
            Vector3.Zero,
            Vector3(0.0f, 1.0f, 0.0f)
        // Look at the origin
        let view = Matrix.LookAtLH(eye, lookAt, up)
        // Place our triangle at the origin
        let world = Matrix.Identity
        let fovy, aspect, near, far =
            float32(Math.PI)/4.0f,
            float32(width)/float32(height),
            1.0f,
            20.0f
        let projection = Matrix.PerspectiveFovLH(fovy, aspect, near, far)
        renderer.Draw(triangle, world*view*projection)
    do form.Paint.Add(paint)
    do Application.Run(form)
    for item in ObjectTable.Objects do
        item.Dispose()
    0
Most of the action happens in Renderer. We ask it to create a drawable object with the supplied vertices. This array represents the three corners of our triangle. During the 'paint' callback we calculate world, view and projection matrices in order to transform our triangle from model space all the way to screen space. We then ask the renderer to draw our triangle with the given transformation.
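If you want to convince yourself of what the combined matrix does, here is a throwaway sanity check (not part of the renderer, and assuming the world, view and projection values computed in the paint callback above): push one corner of the triangle through it.

// A quick sanity check: transform the first corner of the triangle by the
// combined matrix. Vector3.TransformCoordinate applies the matrix and divides
// by w, so for points inside the view frustum x and y land in [-1, 1] and
// z in [0, 1]; the viewport then maps these coordinates to window pixels.
let worldViewProjection = world * view * projection
let corner = Vector3.TransformCoordinate(triangleVertices.[0], worldViewProjection)
printfn "(-1, 0, 0) maps to %A" corner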
type Renderer(windowHandle, width, height) =
    ...
    // Create device, swap chain and render target view as before
    ...
    // Create a viewport with the same size as the window
    let viewport =
        new Viewport(
            Width = float32(width),
            Height = float32(height),
            MinZ = 0.0f,
            MaxZ = 1.0f,
            X = 0.0f,
            Y = 0.0f)
    let deviceContext = device.ImmediateContext
    do deviceContext.OutputMerger.SetTargets([|renderTargetView|])
    do deviceContext.Rasterizer.SetViewports([|viewport|])

    member this.CreateDrawable(triangle) = new Drawable(device, triangle)

    member this.Draw(drawable : Drawable, transform) =
        do device.ImmediateContext.ClearRenderTargetView(
            renderTargetView,
            new Color4(Alpha = 1.0f, Red = 0.0f, Green = 0.0f, Blue = 1.0f))
        do drawable.Draw(transform)
        do swapChain.Present(0, PresentFlags.None) |> ignore
Now that we are going to draw something in our window, we need to create a viewport to describe which part of the window to use. We will use the entire window. The Draw method looks similar to our previous 'paint' callback, except that we now actually draw something between clearing the back buffer and presenting it. That something is our triangle object, which was created with CreateDrawable(...). Let's look at this class.
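The device, swap chain and render target view are created exactly as in the previous post, which is why they are elided above. For readers joining here, this is a minimal sketch of that setup, assuming the standard SlimDX Direct3D 11 pattern; the names and flags are illustrative rather than the exact code from the previous post.

// Illustrative sketch of the elided setup: create a device plus swap chain for
// the window, then wrap the swap chain's back buffer in a render target view.
open SlimDX
open SlimDX.DXGI
open SlimDX.Direct3D11

let createDeviceAndTargets (windowHandle : nativeint) width height =
    let swapChainDesc =
        new SwapChainDescription(
            BufferCount = 1,
            ModeDescription = new ModeDescription(width, height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
            IsWindowed = true,
            OutputHandle = windowHandle,
            SampleDescription = new SampleDescription(1, 0),
            SwapEffect = SwapEffect.Discard,
            Usage = Usage.RenderTargetOutput)
    // The Result return value and the two out parameters come back as an F# tuple
    let _, device, swapChain =
        Direct3D11.Device.CreateWithSwapChain(
            DriverType.Hardware, DeviceCreationFlags.None, swapChainDesc)
    // The back buffer texture can be released once the view holds a reference to it
    use backBuffer = Direct3D11.Resource.FromSwapChain<Texture2D>(swapChain, 0)
    let renderTargetView = new RenderTargetView(device, backBuffer)
    device, swapChain, renderTargetView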
type Drawable(device, triangle) =
    let geometry = new Geometry(device, triangle)
    let material = new Material(device)
    let constants = new ShaderConstants(device)

    member this.Draw(transform) =
        constants.Prepare(transform)
        material.Prepare()
        geometry.Prepare()
        geometry.Draw()
The Drawable class serves as a container for all the components that are needed to draw something to the screen. The Geometry describes what needs to be drawn; all the vertex data resides here. A Material describes what an object's surface should look like. The constants are additional data needed to draw an object. In our example, we need a transformation matrix in order to know where to draw the geometry. This typically changes every frame, so we pass it in with each draw call. Before drawing the geometry, the device context needs to be prepared by telling it which vertex and pixel shaders to use and by copying the constants associated with this object to the GPU. This is done in the Prepare methods of our constants, material and geometry objects.
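Because the transform changes every frame, the paint handler is the natural place to feed in a new one. Here is a hypothetical variation on the paint callback from the main program (reusing renderer, triangle and form from there, and the view and projection matrices computed as in that callback) that spins the triangle.

// Hypothetical animated paint handler: rebuild the world matrix every frame.
let mutable angle = 0.0f
let animatedPaint _ =
    angle <- angle + 0.01f
    let world = Matrix.RotationY(angle)                // rotate around the Y axis
    renderer.Draw(triangle, world * view * projection) // new transform every frame
    form.Invalidate()                                  // request the next frame
// Registered with: form.Paint.Add(animatedPaint)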
The Material in this example is extremely simple. It creates a pixel shader found in the Simple.hlsl file. When the time comes to prepare the renderer with this material, it sets the pixel shader for the device context to this shader.
type Material(device) =
    let pixelShader =
        let psByteCode = ShaderBytecode.CompileFromFile(
                            "Simple.hlsl",
                            "PixShader",
                            "ps_4_0",
                            ShaderFlags.None,
                            EffectFlags.None)
        new PixelShader(device, psByteCode)

    member this.Prepare() =
        do device.ImmediateContext.PixelShader.Set(pixelShader)
The pixel shader will always draw the object in red. We will create some more interesting materials in the near future...
float4 PixShader(float4 position : SV_POSITION) : SV_TARGET
{
    return float4(1, 0, 0, 1); // RED
}
- To set up our geometry we compile the vertex shader found in the same file as our pixel shader.
- We have to specify an input layout so the GPU knows how to interpret the vertex data. For now we are just using a Vector3 to represent a vertex position. In the future we will expand on this by adding normal and texture coordinates to our vertices (see the sketch after this list).
- Next we create a vertex buffer which will copy our vertices to the GPU. Note that this buffer is immutable. We will not be able to change these vertices again.
- When preparing to draw this geometry, we need to specify the input layout, the primitive topology (a triangle list in this case), our vertex buffer and our vertex shader. These could change between different objects, so we have to update these states before drawing.
- When drawing we need to tell the renderer how many vertices to draw.
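Before looking at the class itself, here is a rough, hypothetical sketch of what the input layout might grow into once normals and texture coordinates are added; it is not used anywhere in this post, and the offsets assume tightly packed float3 position, float3 normal and float2 texture coordinate fields.

// Hypothetical richer layout: one element per vertex field, with explicit byte offsets.
let richLayoutElements =
    [| new InputElement(SemanticName = "POSITION", Format = Format.R32G32B32_Float,
                        AlignedByteOffset = 0,  Classification = InputClassification.PerVertexData)
       new InputElement(SemanticName = "NORMAL",   Format = Format.R32G32B32_Float,
                        AlignedByteOffset = 12, Classification = InputClassification.PerVertexData)
       new InputElement(SemanticName = "TEXCOORD", Format = Format.R32G32_Float,
                        AlignedByteOffset = 24, Classification = InputClassification.PerVertexData) |]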
type Geometry(device, vertices : array<Vector3>) =
    let vsByteCode = ShaderBytecode.CompileFromFile(
                        "Simple.hlsl",
                        "VertShader",
                        "vs_4_0",
                        ShaderFlags.None,
                        EffectFlags.None)
    let vertexShader = new VertexShader(device, vsByteCode)
    let inputLayout =
        let position = new InputElement(
                            SemanticName = "POSITION",
                            Format = Format.R32G32B32_Float,
                            Classification = InputClassification.PerVertexData)
        new InputLayout(
            device,
            ShaderSignature.GetInputSignature(vsByteCode),
            [|position|])
    let vertexSize = sizeof<Vector3>
    let vertexBuffer =
        let vertexBufferDesc = new BufferDescription(
                                    BindFlags = BindFlags.VertexBuffer,
                                    SizeInBytes = vertexSize * vertices.Length,
                                    Usage = ResourceUsage.Immutable)
        new SlimDX.Direct3D11.Buffer(
            device,
            new DataStream(vertices, true, false),
            vertexBufferDesc)
    let deviceContext = device.ImmediateContext

    member this.Prepare() =
        do deviceContext.InputAssembler.PrimitiveTopology <-
            PrimitiveTopology.TriangleList
        do deviceContext.InputAssembler.InputLayout <- inputLayout
        do deviceContext.VertexShader.Set(vertexShader)
        do deviceContext.InputAssembler.SetVertexBuffers(
            0, new VertexBufferBinding(vertexBuffer, vertexSize, 0))

    member this.Draw() =
        deviceContext.Draw(vertices.Length, 0)
Our vertex shader is also very simple. It simply multiplies each vertex by the given WorldViewProjection matrix, taking it from model space to projection (clip) space; the viewport then maps the result to screen pixels.
float4 VertShader(float4 position : POSITION) : SV_POSITION
{
    return mul( position, WorldViewProjection );
}
Finally we have our shader constants. This will be used to update our transformation matrix each frame.
type ShaderConstants(device) =
    let vsConstBuffer =
        let vsConstSize = sizeof<Matrix>
        let vsConstBufferDesc = new BufferDescription(
                                    BindFlags = BindFlags.ConstantBuffer,
                                    SizeInBytes = vsConstSize,
                                    CpuAccessFlags = CpuAccessFlags.Write,
                                    Usage = ResourceUsage.Dynamic)
        new SlimDX.Direct3D11.Buffer(device, vsConstBufferDesc)
    let deviceContext = device.ImmediateContext
    let updateShaderConstants constBuffer sizeInBytes data =
        let constData = deviceContext.MapSubresource(
                            constBuffer,
                            0,
                            sizeInBytes,
                            MapMode.WriteDiscard,
                            SlimDX.Direct3D11.MapFlags.None)
        Marshal.StructureToPtr(data, constData.Data.DataPointer, false)
        deviceContext.UnmapSubresource(constBuffer, 0)

    member this.Prepare(worldViewProjection) =
        updateShaderConstants vsConstBuffer sizeof<Matrix> worldViewProjection
        do deviceContext.VertexShader.SetConstantBuffers([|vsConstBuffer|], 0, 1)
The buffer is created similarly to the vertex buffer. In this case, though, we specify that it is dynamic since we want to update it later. When we prepare the shader constants, we copy the new transformation matrix directly into the mapped region reserved for our constant data. The layout of the data we copy needs to map exactly onto the cbuffer layout described in HLSL below. In this case we only have one element, so it is trivial.
cbuffer vsMain
{
row_major matrix WorldViewProjection;
};
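As a hypothetical illustration, if the constant block ever grows beyond a single matrix, the .NET side needs a struct whose layout mirrors the cbuffer field-for-field (and respects the 16-byte packing rules of constant buffers), something along these lines:

// Hypothetical constant block with more than one field; the F# struct must
// match the HLSL cbuffer layout exactly so Marshal.StructureToPtr copies it
// into the mapped buffer correctly.
open System.Runtime.InteropServices

[<StructLayout(LayoutKind.Sequential)>]
type VsConstants =
    struct
        val mutable WorldViewProjection : Matrix   // row_major matrix in HLSL
        val mutable Tint                : Vector4  // an extra float4 in HLSL
    end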
Wow, what a lot of work to draw a triangle! The full source code can be found on Google Code. I use
Mercurial for version control. Why? Mostly because I have always wanted to experiment with distributed version control, and at the moment I have the time to do it. If you are used to SVN and Tortoise, you will find that TortoiseHg is very similar. So far I am enjoying Mercurial. The main benefit for me at the moment is that I can make local commits. I like to commit often to give myself a safe place to revert to. The code may not be in a state where I want to commit it to the server yet, so I just commit it locally. Even when my internet is down, I can still commit. Later, when everything works and I want others to benefit from my changes, I push all my commits to the server. It is pretty painless.