Traditionally, a CRC is a check "word" attached to a stream of data bits, used to detect missing or altered bits in that stream.
The notion of a "forward" or "reverse" polynomial relates to how a controller constructs the bit stream: Most-Significant-Bit first or Least-Significant-Bit first. In general, the bit order only matters when the CRC is used to correct bits after an error has been detected.
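As a concrete sketch (my own illustration, not from the original post), here is the same 16-bit polynomial processed in both bit orders in C: 0x1021 shifted out MSB-first (the "forward" form) and its bit-reversed twin 0x8408 shifted out LSB-first (the "reverse", or reflected, form). The function names are hypothetical.

    #include <stdint.h>
    #include <stddef.h>

    /* MSB-first ("forward" polynomial) CRC-16, poly 0x1021, init 0xFFFF */
    uint16_t crc16_msb_first(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;      /* feed next byte, MSB first */
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* LSB-first ("reverse" polynomial) variant, poly 0x8408
       (0x1021 with its bits mirrored), init 0xFFFF */
    uint16_t crc16_lsb_first(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];                     /* feed next byte, LSB first */
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x0001) ? (uint16_t)((crc >> 1) ^ 0x8408)
                                     : (uint16_t)(crc >> 1);
        }
        return crc;
    }

The reflected form produces a different numeric check value but has the same detection strength; it simply matches hardware that serializes data LSB-first, as a UART does.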
The number of bits in the CRC determines how many error bits can be detected, how long a burst of consecutive error bits can be caught, and how many bits can be corrected over how large a field of data. For example, an n-bit CRC with a suitable polynomial detects every error burst of n bits or fewer.
The type of error that CRCs are best at detecting is the burst error. The size of a data field is chosen so that a likely error burst is confined to a single field and affects as few bits as possible.
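A tiny, hypothetical demo (again my own sketch, reusing crc16_msb_first from above) of why bursts are the easy case: flip a few consecutive bits in a frame and the check value no longer matches.

    #include <stdio.h>

    int main(void)
    {
        uint8_t frame[8] = { 'C', 'R', 'C', ' ', 'd', 'e', 'm', 'o' };
        uint16_t good = crc16_msb_first(frame, sizeof frame);

        frame[3] ^= 0x1E;   /* flip a 4-bit burst inside one byte */
        uint16_t bad = crc16_msb_first(frame, sizeof frame);

        printf("CRC before: 0x%04X  after burst: 0x%04X  %s\n",
               good, bad, (good == bad) ? "missed" : "detected");
        return 0;
    }

Because 0x1021 has a non-zero constant term, any burst of 16 bits or fewer is guaranteed to change the 16-bit check value, so this prints "detected".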
For most current implementations, only the error-detection quality of the CRC seems useful, because re-transmission of the data has a low cost. For systems where the re-transmission cost is high, the correction quality becomes much more important.
The spiral data track on audio CD and video DVD discs makes use of this error-correction feature of CRCs: the data stream can be corrected in memory more quickly than the read head can be repositioned to re-read the data. Also, the data used for audio and video playback does not need to be perfect; small errors tend to go unnoticed by humans as long as the content is rendered in a timely fashion.