Well, but I didn't want to re-implement the fast square root. I wanted to understand it and put that fast calculation inside my 3D vector and matrix code. I assumed this was already well supported by any modern CPU, and some reading pointed out that SSE instructions use it. The graphics processors also use that technique, as I read somewhere. I know that the GPU is great for heavy calculations (CUDA), but I wanted to optimize some calculations in my program via CPU resources.
I decided to try to learn SSE, maybe to use the fast square root (that was the entry point), but especially to use all the other goodies that SSE offers. I have known about SSE for a long time, but going from the theory to the real thing is not always easy. Yes, I know, I should be doing more important things than reinventing the wheel. There are tons of vector classes on the internet, and I believe some of them implement their calculations via SSE. I also try to believe that VC and other compilers optimize code to run nicely with SSE, but I read somewhere that the MS VC compilers do not optimize that well. So, I wanted to test whether it was possible for a simple mortal programmer to use SSE.
Here I will just share some tips about SSE usage, errors found during compilation, how I solved them, and related things that may help you start using SSE. When I talk about SSE, I mean SSE2, SSE3, SSE4. In the end I used just SSE1 and SSE2; in fact, I don't think I used any SSE2 instruction, as those are mostly extensions for variables of type double. The higher levels of SSE are even more specific, aimed at DSP-like instructions.
MY ENVIRONMENT
My classes are basic vectors (not template based), with 3 and 4 elements. The matrix class is 4x4. All elements are floats. All my tests were in MS VC 2010. Matrix calculations are heavy, as are vector ones. I spent an entire Sunday looking, reading and trying to use the SSE operators. I am not going to cover the details of how SSE works; there are tons of sites on the net about this. I supply a link to a book chapter that helped me on some points, to Microsoft MSDN, which helped a lot, and to TuomasTonteri (from Finland, I think), whose nice examples were my starting point. Other sites are linked in the text.
STARTING
To start, you just need to include in your code:
#include "xmmintrin.h"
#include "fvec.h"
- xmmintrin contains the intrinsics that let you generate SSE code without having to write assembler.
- fvec contains the Intel helper classes that simplify the use of SSE. I suggest not trying to use direct assembler in your code. See the short example after this list.
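Just to illustrate the fvec level (a sketch only; I am assuming the F32vec4 helper from Intel's fvec.h, whose four-float constructor takes the elements in high-to-low order):
Code
#include "fvec.h"
// F32vec4 wraps a __m128 and overloads the arithmetic operators,
// so the four multiplications below happen in one SSE instruction
F32vec4 a(4.0f, 3.0f, 2.0f, 1.0f); // elements given in high-to-low order
F32vec4 b(2.0f);                   // broadcast 2.0f to all four elements
F32vec4 c = a * b;                 // c = {2, 4, 6, 8}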
Configuration Properties>C/C++>Code generation>Enable Enhanced Instruction Set
This is weird, but I tried to keep it enabled. The VC 2013 documentation states that this is the default mode of the compiler if you don't set it (it corresponds to the /arch switch on the command line). With this enabled, if you get a /clr error, it may be because your project OR SOME C FILE has the option enabled. I had to check all my C files to find the one that had this option ON, which caused me many headaches in tracking down where the /clr error was coming from.
Also in the same tab, you should set Struct Member Alignment to 16 bytes. However, don't expect this option to solve your problems. The documentation states that it may not work well, and there is no guarantee that the data will be aligned, even if you declare your variables as static :( I could not achieve data alignment using this option. I had to use __declspec( align( 16 ) ) in my class definitions, or _aligned_malloc for dynamic allocations. I overloaded the new operator to do that. If the data is not aligned, an exception is thrown in your code when you call the SSE load functions.
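A minimal sketch of what that operator new overload can look like (the class name here is just for illustration); _aligned_malloc and _aligned_free come from malloc.h:
Code
#include <malloc.h>
class vec4_demo // hypothetical class, just to show the overloads
{
public:
    // route dynamic allocations through _aligned_malloc so objects
    // created with 'new' land on a 16-byte boundary
    void * operator new(size_t size) { return _aligned_malloc(size, 16); }
    void operator delete(void * p)   { _aligned_free(p); }
    float m[4];
};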
BASIC INSTRUCTIONS
The basic steps to use SSE are: load the registers with your data, compute what you want, and store the results back into known variables. To load data, you use _mm_load_ps. Again, this instruction needs the data to be aligned on a 16-byte boundary. For the initial tests I used _mm_loadu_ps, which loads unaligned data into an SSE register. When the calculations are done, you can store the data back into your variables.
The store options are _mm_store_ps and _mm_storeu_ps (for unaligned data). Be aware that using the 'u' versions is highly inadvisable, as they slow down the data transfer to the SSE registers, negating the speed benefit. There are other load and store options; these are the four I tested. The vec classes from Intel provide basic data types that help fill the variables, so there is no need to use assembly language to fill the SSE registers; they are one level above the direct use of intrinsics. A minimal load/compute/store sketch follows.
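To make the cycle concrete, here is a minimal sketch (the variable names are mine, not from my vector classes):
Code
#include "xmmintrin.h"
void sse_add_demo()
{
    __declspec( align( 16 ) ) float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    __declspec( align( 16 ) ) float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
    __declspec( align( 16 ) ) float r[4];
    __m128 va = _mm_load_ps(a);      // load 4 aligned floats into a register
    __m128 vb = _mm_load_ps(b);
    __m128 vr = _mm_add_ps(va, vb);  // 4 additions in a single instruction
    _mm_store_ps(r, vr);             // store back: r = {6, 8, 10, 12}
}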
ABOUT ALIGNMENT
The MSDN documentation suggests using the align declaration in front of your class to force correct memory alignment. I was unable to use it at first, as my compilation showed several errors, mainly in the typedefs of the main class, with function definitions complaining that the data was misaligned. Oh great, so many fuzzy errors.
But then I removed the alignment information from the main class, created a new typedef for the class and put the align declaration there. See the simplified example below.
// Vector class, which others derive from; I cannot force alignment here, as that would
// prevent this data from being passed as a parameter to functions (because it has to go on the stack)
class vector4f {......};
// A typical derivation I use
typedef vector4f COLOR;
// Here I made the trick: create an aligned version
typedef __declspec( align( 16 ) ) vector4f vec4;
With the above, I was able to create a derived type aligned on the 16-byte memory boundary!
I tested the alignment with this simple macro:
#define IsAligned(address) ((unsigned long)(address) & 15)
It returns 0 if the address is aligned, non-zero otherwise.
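For example, a quick sanity check (a sketch; assert comes from <cassert>):
Code
#include <cassert>
void check_alignment()
{
    vec4 v;                     // the aligned typedef from above
    assert(IsAligned(&v) == 0); // fires if v is not on a 16-byte boundary
}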
A certain kind of message is common when you try to pass your aligned class as a parameter to a function. Before the fix above, my compilation output was full of this:
error C2719: 'vecA': formal parameter with __declspec(align('16')) won't be aligned basic.h
After googling, I found that the problem is a known issue. I don't even feel right calling it an issue; it's more of a behavior. A post about this explains why, and several other posts give the same information about the workaround. Aligned data can only be passed to other functions by reference; it cannot be passed by value as an argument. That makes sense to me: the data put on the stack to be passed to a function cannot be aligned, as the stack is not aligned on 16-byte boundaries (maybe eventually it will be). Well, I will have to make a huge change in all my code to use references. I can't even imagine the work to do this on vec3 and all the uses I make of it! But I believe it is worth it, if the speed gains are good.
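In practice the workaround looks like this (hypothetical function, just to show the signatures):
Code
// error C2719 on x86: an aligned type cannot be passed by value
// vec4 add(vec4 a, vec4 b);
// OK: passed by const reference, no by-value stack copy is made
vec4 add(const vec4 &a, const vec4 &b);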
MATRIX TESTS
Each test performed 10,000 operations of its kind. The tests were done in VC 2010 in release mode with /O2 (maximize speed), varying some configurations in VC. All matrices are 4x4. All times reported are in seconds.
I first tested matrix multiplication in the form A = A*B, to see if SSE could improve the performance of my code. I used the plain float multiplication version from hfrt. The time for this test was 0.000286 seconds. Then I unrolled the code, as several sites and even Intel suggest unrolling loops and grouping similar calls together, to make better use of instruction scheduling and code loading.
Code
void mmul_sse_unroll(const float * a, const float * b, float * r)
{
    __m128 s0, x0, x1, x2, x3, r_line;
    // unroll the first step of the loop to avoid having to initialize r_line to zero
    // load the a matrix only once
    x0 = _mm_load_ps(a);      // a_line = vec4(column(a, 0))
    x1 = _mm_load_ps(&a[4]);  // a_line = vec4(column(a, 1))
    x2 = _mm_load_ps(&a[8]);  // a_line = vec4(column(a, 2))
    x3 = _mm_load_ps(&a[12]); // a_line = vec4(column(a, 3))
    for (int i = 0; i < 16; i += 4)
    {
        s0 = _mm_set1_ps(b[i]);      // b_line = vec4(b[i][0])
        r_line = _mm_mul_ps(x0, s0); // r_line = a_line * b_line
        s0 = _mm_set1_ps(b[i+1]);    // b_line = vec4(b[i][j])
        r_line = _mm_add_ps(_mm_mul_ps(x1, s0), r_line);
        s0 = _mm_set1_ps(b[i+2]);    // b_line = vec4(b[i][j])
        r_line = _mm_add_ps(_mm_mul_ps(x2, s0), r_line);
        s0 = _mm_set1_ps(b[i+3]);    // b_line = vec4(b[i][j])
        r_line = _mm_add_ps(_mm_mul_ps(x3, s0), r_line);
        _mm_store_ps(&r[i], r_line); // r[i] = r_line
    }
}
This led to a significant increase in performance: the matrix multiplication now runs in 0.000192 seconds.
I then changed the multiplication to the form C = A*B (each 4x4), as this is a more natural operation. This time I switched the SSE option in the compiler between ENABLED and DISABLED for the tests. All tests used the unrolled version of the code. There are two more important operations happening in this version (see the sketch after this list):
- creation of an intermediate matrix, loaded with the identity, to hold the result of the multiplication
- assignment of the intermediate matrix to the C matrix
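A hypothetical sketch of how that C = A*B path fits together with the unrolled routine above (I am assuming a public float m[16] member and an identity-loading constructor in matrix4x4f):
Code
matrix4x4f operator* (const matrix4x4f &a, const matrix4x4f &b)
{
    matrix4x4f tmp;                   // constructor loads the identity
    mmul_sse_unroll(a.m, b.m, tmp.m); // unrolled SSE multiply from above
    return tmp;                       // the result is then assigned to C
}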
SSE ENABLED in compiler:
Normal routine: 0.001592
SSE routine: 0.000236 (6.74x faster than the normal one in this group)
Normal routine with memset to clear matrix: 0.0018
SSE option DISABLED in compiler (not set):
Normal routine: 0.000595 (weird; was the code optimized with SSE here anyway?)
SSE routine: 0.000236 (same as before) (2.52x faster than the normal one in this group)
Normal routine with memset to clear matrix: 0.000804 (weird again)
The overall gain between the worst and the best case was 7.83x.
I cleaned the intermediate objects of the build, then compiled and ran each test more than 3 times, to ensure that the correct code was being compiled. During the matrix multiplication tests, I identified that the load-identity routine was taking too much time, especially because every time two matrices are multiplied, an intermediate matrix is created to store the result of the multiplication.
MATRIX IDENTITY
The first version used memset plus 4 assignments to build the identity matrix.
The second used a simple for loop to clear the matrix, plus the 4 assignments, and was much faster.
Identity speed:
Normal: 0.000217
Optimized: 0.000067 (3.23x faster)
I could not use SSE here, because when the matrix is created it seems not to be aligned (even with the declaration float __declspec(align(16)) m[16]), causing a segmentation fault. As stated above, the memset version cost roughly an extra 0.0002 seconds.
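For reference, a sketch of that faster identity routine (again assuming a public float m[16] member):
Code
void matrix4x4f::identity()
{
    // clear with a plain loop (faster than memset in my tests)
    for (int i = 0; i < 16; i++)
        m[i] = 0.0f;
    // the 4 assignments: set the diagonal
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}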
MATRIX ASSIGNMENT
Another identified source of slowdown was the assignment function of the matrices. The assignment between matrices initially used direct = operations between floats. I converted it to SSE and, as the matrix data is aligned, the time to load the matrix drops. All the tests above used the SSE version.
Assignment speed:
Normal: 0.000074
SSE: 0.000017 (4.35x faster)
Code
// SSE version
matrix4x4f & operator= (const matrix4x4f &aa)
{
    __m128 a, b, c, d;
    // four aligned loads pull the source matrix into registers...
    a = _mm_load_ps(&aa.m[0]);
    b = _mm_load_ps(&aa.m[4]);
    c = _mm_load_ps(&aa.m[8]);
    d = _mm_load_ps(&aa.m[12]);
    // ...and four aligned stores write it into this matrix
    _mm_store_ps(&m[0],  a);
    _mm_store_ps(&m[4],  b);
    _mm_store_ps(&m[8],  c);
    _mm_store_ps(&m[12], d);
    return *this; // return by reference, so no extra copy is made
}
I also grouped several similar operations together whenever possible to get better results, and used as many of the available registers as I could: 4 in this case, out of a possible 8 (16 in 64-bit mode).
FINAL CONSIDERATIONS
The main intention of this exercise was to use SSE. I have not yet tested optimizations of the SQRT calculations, which was the initial goal. However, I achieved performance improvements in my code with SSE instructions, and even improved things that I never thought could consume time. The compiler options of VC 2010 seem to affect the behavior of the generated code in weird ways; maybe an inspection of the generated ASM code could identify better what the compiler did.
Memory alignment, clustering of instructions, unrolling of loops, and loading and assigning data via SSE registers can yield real improvements in execution time.
The matrix calculations in my current code are not heavy, so they are not a problem. But I want to test the kinematics of several links, and I believe that in that scenario matrix multiplication may become a topic of concern.
New posts about this soon.