GetConceptualFaceArea

This function returns the area of a given conceptual face. The vertexBuffer and indexBuffer arguments are optional, but providing them improves performance.

Syntax

public const string enginedll = @"engine.dll";

[DllImport(enginedll, EntryPoint = "GetConceptualFaceArea")]
public static extern double GetConceptualFaceArea(Int64 conceptualFace, ref float vertexBuffer, ref Int32 indexBuffer);

[DllImport(enginedll, EntryPoint = "GetConceptualFaceArea")]
public static extern double GetConceptualFaceArea(Int64 conceptualFace, ref float vertexBuffer, ref Int64 indexBuffer);

[DllImport(enginedll, EntryPoint = "GetConceptualFaceArea")]
public static extern double GetConceptualFaceArea(Int64 conceptualFace, ref double vertexBuffer, ref Int32 indexBuffer);

[DllImport(enginedll, EntryPoint = "GetConceptualFaceArea")]
public static extern double GetConceptualFaceArea(Int64 conceptualFace, ref double vertexBuffer, ref Int64 indexBuffer);

[DllImport(enginedll, EntryPoint = "GetConceptualFaceArea")]
public static extern double GetConceptualFaceArea(Int64 conceptualFace, IntPtr vertexBuffer, IntPtr indexBuffer);

public static double GetConceptualFaceArea(Int64 conceptualFace)
{
    return GetConceptualFaceArea(conceptualFace, IntPtr.Zero, IntPtr.Zero);
}
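
A minimal usage sketch; it assumes conceptualFace was obtained via GetConceptualFace() and vertexBuffer / indexBuffer were filled via UpdateInstanceVertexBuffer() / UpdateInstanceIndexBuffer(), as in the example at the end of this page:

//  optional buffers omitted, the convenience overload passes IntPtr.Zero
double area = GetConceptualFaceArea(conceptualFace);

//  optional buffers provided (faster, see the note above)
double areaFromBuffers = GetConceptualFaceArea(conceptualFace, ref vertexBuffer[0], ref indexBuffer[0]);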

Property conceptualFace

Size: 64 bit / 8 byte (value)
A conceptual face is a face that does not have to lie within one plane. For example, a Cylinder typically has 3 conceptual faces, whereas a Box typically has 6. In the case of a Box the 'normal' face count equals the conceptual face count; in the case of a Cylinder the 'normal' face count depends on the segmentation: with 36 segments, for example, the Cylinder has 38 faces but still 3 conceptual faces.
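
The snippet below is a short sketch (taken from the example at the end of this page, assuming instanceBox is an existing Box instance) that shows how conceptual face handles are enumerated:

Int64 conceptualFaceCnt = RDF.engine.GetConceptualFaceCnt(instanceBox);
for (Int64 index = 0; index < conceptualFaceCnt; index++)
{
    Int64   startIndexTriangles = 0,
            noIndicesTriangles = 0;
    Int64   conceptualFace =
                RDF.engine.GetConceptualFace(
                        instanceBox, index,
                        out startIndexTriangles, out noIndicesTriangles,
                        (IntPtr) 0, (IntPtr) 0,
                        (IntPtr) 0, (IntPtr) 0,
                        (IntPtr) 0, (IntPtr) 0,
                        (IntPtr) 0, (IntPtr) 0
                    );

    //  for a Box this loop is expected to run 6 times, for a Cylinder 3 times
}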

Property vertexBuffer

Size: 32 bit / 4 byte (reference)
The array of vertices; this array is allocated by the host application. Depending on SetFormat() the array consists of 32 bit (4 byte) single precision floats or 64 bit (8 byte) double precision floats. Each vertex consists of several elements, i.e. X, Y, Z values, but optionally also nX, nY, nZ normal values, texture coordinates, bitangent / binormal coordinates, colors etc. What is contained is defined by SetFormat() and can be retrieved via GetFormat(). The host application has to make sure enough memory is allocated for the vertexBuffer array.
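
A minimal allocation sketch, assuming single precision (float) vertices and the 6-value-per-vertex layout used in the example at the end of this page (instanceBox is an existing instance handle):

Int64   vertexBufferSize = 0,
        indexBufferSize = 0;
RDF.engine.CalculateInstance(instanceBox, out vertexBufferSize, out indexBufferSize, (IntPtr) 0);

float[] vertexBuffer = new float[6 * vertexBufferSize];     //  6 floats per vertex assumed here
RDF.engine.UpdateInstanceVertexBuffer(instanceBox, ref vertexBuffer[0]);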

Property indexBuffer

Size: 32 bit / 4 byte (reference)
The array of indices; this array is allocated by the host application. Depending on SetFormat() the array consists of 32 bit (4 byte) integers or 64 bit (8 byte) integers.
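
The matching sketch for the index buffer, assuming 32 bit (Int32) indices; with 64 bit indices an Int64[] array would be used instead (indexBufferSize as returned by CalculateInstance() above):

Int32[] indexBuffer = new Int32[indexBufferSize];
RDF.engine.UpdateInstanceIndexBuffer(instanceBox, ref indexBuffer[0]);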

Example (based on pure API calls)

Here you can find code snippets that show how the API call GetConceptualFaceArea can be used.

using RDF;      //  include at least engine.cs within your solution

...

static void Main(string[] args)
{
    Int64   model = RDF.engine.CreateModel();

    if (model != 0)
    {
        //
        //  Classes
        //
        Int64 classBox = RDF.engine.GetClassByName(model, "Box");

        //
        //  Datatype Properties (attributes)
        //
        Int64   propertyLength = RDF.engine.GetPropertyByName(model, "length"),
                propertyWidth = RDF.engine.GetPropertyByName(model, "width"),
                propertyHeight = RDF.engine.GetPropertyByName(model, "height");

        //
        //  Instances (creating)
        //
        Int64   instanceBox = RDF.engine.CreateInstance(classBox, (string) null);

        double  length = 2.8,
                width = 1.3,
                height = 1.4;

        RDF.engine.SetDatatypeProperty(instanceBox, propertyLength, ref length, 1);
        RDF.engine.SetDatatypeProperty(instanceBox, propertyWidth, ref width, 1);
        RDF.engine.SetDatatypeProperty(instanceBox, propertyHeight, ref height, 1);

        Int64   vertexBufferSize = 0,
                indexBufferSize = 0;
        RDF.engine.CalculateInstance(instanceBox, out vertexBufferSize, out indexBufferSize, (IntPtr) 0);
        if (vertexBufferSize != 0 && indexBufferSize != 0)
        {
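            //  6 floats per vertex are assumed here, typically X, Y, Z and the normal nX, nY, nZ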
            float[] vertexBuffer = new float[6 * vertexBufferSize];
            RDF.engine.UpdateInstanceVertexBuffer(instanceBox, ref vertexBuffer[0]);

            Int32[] indexBuffer = new Int32[indexBufferSize];
            RDF.engine.UpdateInstanceIndexBuffer(instanceBox, ref indexBuffer[0]);

            Int64   conceptualFaceCnt = RDF.engine.GetConceptualFaceCnt(instanceBox);
            for (Int64 index = 0; index < conceptualFaceCnt; index++) {
                Int64   startIndexTriangles = 0,
                        noIndicesTriangles = 0,
                        conceptualFace =
                    RDF.engine.GetConceptualFace(
                            instanceBox, index,
                            out startIndexTriangles, out noIndicesTriangles,
                            (IntPtr) 0, (IntPtr) 0,
                            (IntPtr) 0, (IntPtr) 0,
                            (IntPtr) 0, (IntPtr) 0,
                            (IntPtr) 0, (IntPtr) 0
                        );

                double  conceptualFaceArea =
                            RDF.engine.GetConceptualFaceArea(
                                    conceptualFace,
                                    ref vertexBuffer[0],
                                    ref indexBuffer[0]
                                ),
                        conceptualFacePerimeter =
                            RDF.engine.GetConceptualFacePerimeter(
                                    conceptualFace
                                );
            }
        }

        //
        //  The resulting model can be viewed in 3D-Editor.exe
        //
        RDF.engine.SaveModel(model, "c:\\created\\myFile.bin");
        RDF.engine.CloseModel(model);
    }
}