
BOM – Is it part of the data?

This post is a response to a comment by PaulH at Ben Nadel’s blog. It raises what I think is an interesting and important discussion, but one sufficiently off-topic to the blog entry at hand that I didn’t want to completely derail the on-topic conversation there.

The BOM (Byte Order Mark, U+FEFF) was originally intended to indicate byte order when exchanging text between systems that might be little-endian or big-endian, so each system could write characters in the byte order best suited to it while still interoperating with systems of the opposite endianness. In recent years its use has mostly been as a hint to byte-stream-to-string decoders about the encoding of the data. For example, you can identify a UTF-8 encoded file with a leading BOM because it will start with the bytes 0xEF 0xBB 0xBF (the UTF-8 encoding of U+FEFF). UTF-16BE will start with 0xFE 0xFF, UTF-16LE with 0xFF 0xFE, and so on.
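Those byte signatures are all a decoder needs to sniff the encoding. A minimal sketch of that detection, using Python purely for illustration (the helper name `sniff_encoding` is my own; UTF-32 signatures are omitted for brevity):

```python
# The BOM signatures described above, each the encoding of U+FEFF.
BOMS = [
    (b"\xef\xbb\xbf", "utf-8"),     # U+FEFF encoded as UTF-8
    (b"\xff\xfe", "utf-16-le"),     # U+FEFF in little-endian byte order
    (b"\xfe\xff", "utf-16-be"),     # U+FEFF in big-endian byte order
]

def sniff_encoding(raw: bytes, default: str = "utf-8") -> str:
    """Return a best-guess encoding from a leading BOM, else the default."""
    for signature, name in BOMS:
        if raw.startswith(signature):
            return name
    return default
```

Note that this only works as a hint: a byte stream with no BOM tells you nothing, which is why the BOM caught on as an encoding marker in the first place.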

In a conceptual sense, byte order marks are not considered part of the data. In fact, U+FEFF is ZERO WIDTH NO-BREAK SPACE, and its use as a character within data is deprecated precisely because of its role as a BOM – you’re supposed to use U+2060 WORD JOINER instead. The fact that U+FEFF is a zero-width character is part of why it was chosen as the BOM: if a system reads the encoding correctly but doesn’t know to deal with the BOM, the BOM is simply interpreted as an invisible character – exactly what we’re seeing with CFHTTP.
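That "invisible character" behavior is easy to reproduce. In Python (used here only as an illustration of decoder behavior in general), the plain `utf-8` codec keeps U+FEFF as an ordinary character, leaving a zero-width character at the front of the string that is present in the data but invisible when rendered:

```python
raw = b"\xef\xbb\xbf<doc/>"   # UTF-8 bytes with a leading BOM

kept = raw.decode("utf-8")    # plain utf-8 keeps the BOM as a character
assert kept[0] == "\ufeff"    # invisible U+FEFF at the front
assert len(kept) == len("<doc/>") + 1  # one extra, zero-width character
```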

The problem in this case is that no character, not even a zero-width character, is permitted before the XML declaration in an XML document. ColdFusion’s cfhttp tag preserves the BOM as part of the data, while its xmlParse() function fails to handle it. This is an inconsistency, and I suspect you could start a holy war over which feature has the bug.

I Googled around for a while to see if I could find a source that says definitively whether a BOM is considered part of the Unicode data, or simply a hint intended to be dropped as part of decoding (i.e., just as we convert multiple bytes into a single character for code points above U+007F, we consume the hint as a means of learning how the file is encoded, and nothing else). This latter case has always been my understanding, and the behavior is reinforced by many systems, including ColdFusion itself in some arenas (e.g., reading a file with cffile that contains a leading BOM – the BOM will be discarded). Unfortunately, other systems do seem to retain the BOM, but for those systems it’s impossible to say whether this was a design decision or a failure to address the BOM at all, as the two would look the same to an outside observer.
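The cffile behavior described above – consuming a leading BOM while reading a file – can be sketched in Python, which exposes both behaviors through its codec names (`utf-8-sig` discards the BOM during decoding, plain `utf-8` keeps it; the file path here is just a scratch temp file):

```python
import os
import tempfile

# Write a file containing a UTF-8 BOM followed by some text.
path = os.path.join(tempfile.mkdtemp(), "bom.txt")
with open(path, "wb") as f:
    f.write(b"\xef\xbb\xbfhello")

# Read it back two ways.
with open(path, encoding="utf-8") as f:
    kept = f.read()        # BOM survives as U+FEFF in the data
with open(path, encoding="utf-8-sig") as f:
    stripped = f.read()    # BOM consumed as metadata, like cffile

assert kept == "\ufeffhello"
assert stripped == "hello"
```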

I couldn’t find much in the way of a definitive statement for or against the BOM being retained as part of the character array. The closest I could find is something you alluded to: XML 1.0 specifies that the BOM is not considered part of the data, and should be used only to identify the byte order of the data being passed to the parser, and otherwise ignored. That wording is a bit ambiguous, though, since in its original context it’s talking about how to handle a BOM at the start of a byte stream.

As to the applicability of this part of the XML standard to ColdFusion’s XML parser: ColdFusion’s XML parser isn’t dealing with a byte stream, it’s dealing with a character array. By the time we call xmlParse(), we are past any need for the BOM (if we’ve already decoded the byte stream into characters, the BOM can no longer affect what we do with those characters), so the XML standard’s guidance on handling the BOM no longer applies.
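Since by parse time we’re holding characters rather than bytes, the pragmatic workaround is to drop a leading U+FEFF before handing the string to the parser. A sketch of that idea in Python (the helper name `parse_xml_forgiving` is my own; in CF the equivalent would be stripping the character before calling xmlParse()):

```python
import xml.etree.ElementTree as ET

def parse_xml_forgiving(text: str) -> ET.Element:
    """Drop a leading U+FEFF, if any, before parsing the character data."""
    if text.startswith("\ufeff"):
        text = text[1:]
    return ET.fromstring(text)

root = parse_xml_forgiving('\ufeff<?xml version="1.0"?><doc><item/></doc>')
assert root.tag == "doc"
```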

So this all comes down to: is the BOM part of the data, or just metadata? Conceptually it’s metadata; the question is whether it should be preserved in the character array. I land firmly in the camp of it being purely metadata, and of it being desirable to discard it during character decoding. ColdFusion treats it as metadata in some instances and as part of the data in others, and this inconsistency is where the original error comes from.

Some software even provides an option as to whether or not the BOM should be preserved as part of the data.
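Python is one example of software exposing exactly that choice, on both the encode and decode sides: the `utf-8-sig` codec writes and strips the BOM as metadata, while plain `utf-8` treats U+FEFF as ordinary data.

```python
with_bom = "hello".encode("utf-8-sig")   # codec prepends the BOM
without_bom = "hello".encode("utf-8")    # no BOM written

assert with_bom == b"\xef\xbb\xbfhello"
assert without_bom == b"hello"

assert with_bom.decode("utf-8-sig") == "hello"    # BOM consumed as metadata
assert with_bom.decode("utf-8") == "\ufeffhello"  # BOM kept as data
```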

In any event, CF should be consistent in how it handles the BOM. If it considers the BOM part of the data, then its string functions should consistently handle it as such (i.e., xmlParse() should ignore it). If it considers the BOM metadata, then it should always be discarded when decoding a byte stream into a character array. The fact that in some cases ColdFusion discards the BOM when decoding characters says to me that the design decision by Macromedia/Adobe was to discard it – someone had to have written code to that effect – but that it wasn’t applied consistently throughout.

