At 01:14 PM 8/28/2002 -0400, you wrote:
> it might be possible to generate a waveform which a liberal CC decoder
> could be able to handle iff
That's what I'm thinking. The more modern CC decoders built into today's
TVs might be really tolerant.
> - the length and timing of the clock run-in were modified to put it
> at the beginning of the 2600 active screen (with the data bits comprising
> the last 121 pixels of the active screen)
I'm not even sure today's decoders require a clock run-in at all. I hope
they don't, because I don't see how the 2600 can generate video that far to
the left of the screen, in the region the 2600 generally considers overscan.
> 2. Although it is possible to generate an approximation of the clock run-in
> using the 2600 player graphics, generating the actual data bits would be
> impossible. This is because the data bits are modulated using the same
> frequency as used for the clock run-in. Each cycle spans 7.11 pixels; each
> bit is 17/9 cycles long and 13.43 pixels wide. Obviously the 2600 is not
> up to the task of creating even an approximation of such a complex signal.
It might be able to, depending on how the decoder works. When I looked at
the source code of the decoder, the way it works is to sample the scanline
in multiple places, and as long as it finds white in at least ONE sample it
treats the bit as a 1; otherwise it's a 0. So the 2600 could generate a
crude approximation with "wrong" pixel spacings and widths, as long as it
stays within the tolerance of the decoder.
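
In other words, something like this minimal sketch (the names, sample count,
and threshold are all assumptions for illustration, not the actual decoder's
code):

/* Sketch of the "any white sample counts as a 1" rule described above.
 * Everything here is hypothetical; it only illustrates the decoding
 * style, not any particular decoder. */
#include <stdint.h>

#define SAMPLES_PER_BIT 4     /* assumed: a few samples taken per bit cell */
#define WHITE_LEVEL     80    /* assumed luma value that counts as "white" */

/* Return 1 if ANY sample in the bit cell reads white, else 0. */
static int decode_bit(const uint8_t *cell)
{
    for (int i = 0; i < SAMPLES_PER_BIT; i++)
        if (cell[i] >= WHITE_LEVEL)
            return 1;         /* one white sample is enough */
    return 0;                 /* nothing white anywhere in the cell */
}

/* Walk the 16 data bits of the line, one bit cell at a time. */
static uint16_t decode_data_bits(const uint8_t *line, int data_start)
{
    uint16_t bits = 0;
    for (int b = 0; b < 16; b++)
        bits = (uint16_t)((bits << 1) |
                          decode_bit(line + data_start + b * SAMPLES_PER_BIT));
    return bits;
}

A decoder built that way doesn't care where inside the cell the white
actually lands, which is exactly what would let sloppy 2600 pixel spacing
slide through.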
When the CC spec says that an encoding should have certain timings and so
on, I don't think it's saying that all decoders must reject a signal that
falls just out of spec. It's more of a guideline for the encoders. You'd
expect the decoders to try harder to work with a bad signal. Decoders have
to be tolerant to a certain degree in order to handle bad transmissions.
> (each pixel of the data bits would need to be controlled).
Not necessarily. As long as the decoder can read at least the majority of
the pixels, it might still be possible to work out a list of bytes that can
be sent, since leaving those zones unlit is basically sending a zero bit.
That set might be broad enough to get the job done.
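
As a toy illustration (the mask is completely made up; the real set of
controllable positions would depend on how the player graphics line up
against the bit cells):

/* Hypothetical sketch: treat the positions the 2600 can reliably light
 * as a bit mask; everything outside the mask stays dark and reads as 0.
 * A byte is sendable only if all of its 1 bits fall inside the mask. */
#include <stdio.h>

int main(void)
{
    unsigned mask = 0x5A;   /* invented: bit positions the 2600 can hit */
    /* Line 21 bytes carry 7 data bits plus parity, so 128 payload values. */
    for (unsigned b = 0; b < 128; b++)
        if ((b & ~mask) == 0)          /* no 1 bits outside the mask */
            printf("sendable: 0x%02X\n", b);
    return 0;
}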
> - The only other possibility is if the CC decoder doesn't require
> the bits to be modulated. Then the playfield graphics could be used, as
> has been previously hypothesized, to generate the data bits.
When you say modulated, you mean the narrow black spaces between the wide
data bits? I don't think decoders require those, although the spec advises
them. I think they figure out the boundaries between the bits based on
timing thresholds.
Remember that closed captioning started back in the early 70s, when
technology as crude as the 2600 was still a pipe dream. Decoders were
probably much more finicky about the waveform back then, since they were
likely just riding the analog wave in real time like a needle in the groove
of a record. The CC spec takes this into account to make sure encoders
continue to produce signals that ancient CC decoders can handle. But I'm
sure modern CC decoders think more in terms of digitally chopping the
scanline into a series of samples that resolve to highs and lows, then
grouping the samples together and assessing whether each expected data bit
is a 1 or a 0. That would be pretty tolerant of a bit starting a little too
soon or ending a little too late.
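
Roughly like this sketch (all the numbers are invented; the point is that
only the middle of each bit cell gets a vote):

/* Sketch of the "chop into samples, group per expected bit" idea.
 * Voting only on the middle of each bit cell is what makes it
 * forgiving of edges that land a little off. */
#include <stdint.h>

#define SAMPLES_PER_BIT 8
#define EDGE_SLOP       2     /* assumed: samples to ignore at each edge */
#define WHITE_LEVEL     80    /* assumed luma value that counts as "white" */

static int decode_bit_tolerant(const uint8_t *cell)
{
    int white = 0, total = 0;
    /* The edge samples may belong to a neighboring bit that started
     * too soon or ended too late, so only the middle samples vote. */
    for (int i = EDGE_SLOP; i < SAMPLES_PER_BIT - EDGE_SLOP; i++) {
        white += (cell[i] >= WHITE_LEVEL);
        total++;
    }
    return white * 2 >= total;         /* majority of the middle wins */
}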