# Decimal data type precision on SQL Server

The MSDN article on decimal and numeric data types explains that precision in the decimal data type is the:

> …total number of decimal digits that can be stored, both to the left and to the right of the decimal point.

I initially read this as the number of decimal digits on each side of the decimal point. In other words, for a column of type `decimal(3,2)`, I expected 3 digits to the left and 3 digits to the right of the decimal point. What it actually means is that precision is the *total* number of digits, and scale is how many of those fall after the decimal point, so `decimal(3,2)` has a maximum value of 9.99.
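The relationship between precision, scale, and the maximum storable value can be sketched in Python; `max_decimal` is a hypothetical helper name, not anything from SQL Server:

```python
from decimal import Decimal

def max_decimal(precision: int, scale: int) -> Decimal:
    """Largest value a SQL Server decimal(precision, scale) column can hold:
    `precision` total digits, `scale` of them after the decimal point."""
    return (Decimal(10) ** precision - 1) / (Decimal(10) ** scale)

print(max_decimal(3, 2))  # 9.99
```

The formula makes the point concrete: the precision bounds the digit count of the whole number, and the scale just slides the decimal point.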

If you are trying to create a seven-figure currency field, you would need `decimal(9,2)`, which allows you to store a maximum value of 9999999.99.
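A quick check of that claim in plain Python (the variable names here are illustrative, not SQL Server's):

```python
from decimal import Decimal

# decimal(9,2): 9 total digits, 2 of them after the decimal point
precision, scale = 9, 2
max_value = (Decimal(10) ** precision - 1) / (Decimal(10) ** scale)

print(max_value)                           # 9999999.99
print(max_value >= Decimal("1000000.00"))  # seven figures fit: True
```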

According to the storage table in the article, a decimal precision of 1 through 9 takes up 5 bytes. At first glance that seems too small a register for the range, but the value is stored as a scaled integer: in `decimal(9,2)`, the value 9999999.99 is held as the integer 999999999 together with the fixed scale of 2. The largest 9-digit magnitude, 999,999,999, needs only 30 bits, so 4 of the 5 bytes are enough to hold the magnitude while the remaining byte records the sign. That is how `decimal(9,2)` can store everything from -9999999.99 up to 9999999.99 in 5 bytes.
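The storage arithmetic can be verified directly. This is a sketch of one plausible layout, treating the stored value as a sign plus a scaled-integer magnitude, which is consistent with the byte counts in the article's table:

```python
# Largest magnitude a 9-digit decimal ever needs once the decimal
# point is dropped: 9999999.99 is stored as the integer 999999999.
max_magnitude = 10 ** 9 - 1          # 999,999,999

bits_needed = max_magnitude.bit_length()
print(bits_needed)                   # 30
print(bits_needed <= 32)             # fits in 4 bytes: True
# 4 bytes of magnitude + 1 byte for the sign = the 5 bytes in the table
```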

### Norbert Krupa

Technical Consultant