- The rules can be found at: http://www.ietf.org/rfc/rfc2279.txt?number=2279 (1)
- The table can be found at: http://www.utf8-chartable.de/unicode-utf8-table.pl (2)
From (1), the UTF-8 definition:

In UTF-8, characters are encoded using sequences of 1 to 6 octets. The only octet of a "sequence" of one has the higher-order bit set to 0, the remaining 7 bits being used to encode the character value. In a sequence of n octets, n>1, the initial octet has the n higher-order bits set to 1, followed by a bit set to 0. The remaining bit(s) of that octet contain bits from the value of the character to be encoded. The following octet(s) all have the higher-order bit set to 1 and the following bit set to 0, leaving 6 bits in each to contain bits from the character to be encoded.

The table below summarizes the format of these different octet types. The letter x indicates bits available for encoding bits of the UCS-4 character value.

UCS-4 range (hex.) | UTF-8 octet sequence (binary) |
0000 0000-0000 007F | 0xxxxxxx |
0000 0080-0000 07FF | 110xxxxx 10xxxxxx |
0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx |
0001 0000-001F FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx |
0020 0000-03FF FFFF | 111110xx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx |
0400 0000-7FFF FFFF | 1111110x 10xxxxxx ... 10xxxxxx |

Encoding from UCS-4 to UTF-8 proceeds as follows:

1) Determine the number of octets required from the character value and the first column of the table above. It is important to note that the rows of the table are mutually exclusive, i.e. there is only one valid way to encode a given UCS-4 character.

2) Prepare the high-order bits of the octets as per the second column of the table.

3) Fill in the bits marked x from the bits of the character value, starting from the lower-order bits of the character value and putting them first in the last octet of the sequence, then the next to last, etc. until all x bits are filled in.

The algorithm for encoding UCS-2 (or Unicode) to UTF-8 can be obtained from the above, in principle, by simply extending each UCS-2 character with two zero-valued octets. However, pairs of UCS-2 values between D800 and DFFF (surrogate pairs in Unicode parlance), being actually UCS-4 characters transformed through UTF-16, need special treatment: the UTF-16 transformation must be undone, yielding a UCS-4 character that is then transformed as above.

Let's now explain how to create different UTF-8 representations of the same character following the aforementioned rules. Let's take for example "<" (the less-than sign). If we look at the table (2) we'll see that the encoding for "<" is:
ASCII | HEX | BINARY |
< | %3C | 00111100 |
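Just to make the three-step procedure above concrete, here is a minimal Python sketch of the table-driven encoder. The function name ucs4_to_utf8 and the RANGES list are simply my transcription of the RFC table, not code taken from the RFC:

```python
# Minimal sketch of the UCS-4 -> UTF-8 encoding steps described in RFC 2279.
# The (upper bound, octet count, lead-byte prefix) triples come from the table above.

RANGES = [
    (0x0000007F, 1, 0b00000000),
    (0x000007FF, 2, 0b11000000),
    (0x0000FFFF, 3, 0b11100000),
    (0x001FFFFF, 4, 0b11110000),
    (0x03FFFFFF, 5, 0b11111000),
    (0x7FFFFFFF, 6, 0b11111100),
]

def ucs4_to_utf8(code_point: int) -> bytes:
    """Return the shortest (i.e. the only valid) UTF-8 sequence for a UCS-4 value."""
    for upper, length, lead in RANGES:          # step 1: pick the row of the table
        if code_point <= upper:
            break
    else:
        raise ValueError("value out of UCS-4 range")

    octets = []
    for _ in range(length - 1):                 # step 3: fill x bits, last octet first
        octets.append(0b10000000 | (code_point & 0b00111111))
        code_point >>= 6
    octets.append(lead | code_point)            # step 2: lead octet gets prefix + leftover bits
    return bytes(reversed(octets))

print(ucs4_to_utf8(0x3C).hex())   # 3c -> one octet, matching the table entry for "<"
print("<".encode("utf-8").hex())  # 3c -> Python's own encoder agrees
```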
If we look at the binary representation we can see that it is perfectly valid, as it follows the pattern 0xxxxxxx. But what if we wanted to use two octets (an overlong representation) to encode 00111100, our less-than sign? To do so, let's split it into a 2-bit chunk and a 6-bit chunk: 00 and 111100. Then let's substitute the x's with our two chunks, from the low-order to the high-order bits (right to left), padding the remaining x's with 0's.
 | 1st octet | 2nd octet |
PATTERN | 110xxxxx | 10xxxxxx |
BINARY | 11000000 | 10111100 |
HEX | %C0 | %BC |
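Translated into code, the construction above looks like this (a small Python sketch; the helper name overlong_two_octets is mine, chosen for illustration):

```python
# Sketch of the two-octet overlong construction for "<" (0x3C = 00111100):
# the 2 high-order bits go into the 110xxxxx lead octet, the 6 low-order bits
# into the 10xxxxxx continuation octet.

def overlong_two_octets(ch: str) -> bytes:
    value = ord(ch)                       # 0b00111100 for "<"
    high = (value >> 6) & 0b00011111      # 00 -> padded into the lead octet
    low = value & 0b00111111              # 111100 -> continuation octet
    return bytes([0b11000000 | high, 0b10000000 | low])

print(overlong_two_octets("<").hex())     # c0bc, i.e. %C0 %BC as in the table above
# A strict UTF-8 decoder must reject this sequence, because 0x3C fits in a single octet.
```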
Now, analogously, if we want to use a three-octet encoding we have:
 | 1st octet | 2nd octet | 3rd octet |
PATTERN | 1110xxxx | 10xxxxxx | 10xxxxxx |
BINARY | 11100000 | 10000000 | 10111100 |
HEX | %E0 | %80 | %BC |
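The same bit-filling rule generalizes to any sequence length, so here is a hedged Python sketch that reproduces both overlong forms in one go (again, overlong_utf8 is just an illustrative helper, not a standard library function):

```python
# General sketch: build an n-octet (possibly overlong) UTF-8 representation of a
# code point by filling the x bits from the low-order end, exactly as in steps 2-3.

LEAD_PREFIX = {1: 0b00000000, 2: 0b11000000, 3: 0b11100000,
               4: 0b11110000, 5: 0b11111000, 6: 0b11111100}

def overlong_utf8(code_point: int, length: int) -> bytes:
    lead = LEAD_PREFIX[length]
    octets = []
    for _ in range(length - 1):
        octets.append(0b10000000 | (code_point & 0b00111111))   # continuation octets
        code_point >>= 6
    octets.append(lead | code_point)                            # lead octet
    return bytes(reversed(octets))

print(overlong_utf8(ord("<"), 2).hex())   # c0bc   -> %C0 %BC, the two-octet form
print(overlong_utf8(ord("<"), 3).hex())   # e080bc -> %E0 %80 %BC, the three-octet form
```

These are exactly the byte sequences that a lenient decoder may still map back to "<", which is why they can slip past naive filters, as mentioned below.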
Thanks to these encodings, the guy at http://i8jesus.com/?p=55 managed to bypass SiteMinder filters. Many thanks to my colleague daigoro for helping me understand this subject better.