Core Audio - How to manually convert 8.24-bit deinterleaved LPCM to 16-bit LPCM?
I have a chunk of data (void *): 2 ch, 44100 Hz, 'lpcm' 8.24-bit little-endian signed integer, deinterleaved. I need to record this chunk to a file as 2 ch, 44100 Hz, 'lpcm' 16-bit little-endian signed integer.
How do I convert the data? I imagine I need something like this:
    UInt32 dataByteSize = sizeof(UInt32) * samplesCount;
    UInt32 *source = ...;
    UInt32 *dest = (UInt32 *)malloc(dataByteSize);
    for (int i = 0; i < samplesCount; ++i) {
        UInt32 sourceSample = source[i];
        UInt32 destSample = sourceSample >> 24;
        dest[i] = destSample;
    }
But how do I convert deinterleaved data to interleaved?
OK, I've spent some time investigating the issue and realized that the question contained too little information to be answered =) Here's the deal:
First, about non-interleaved data: I thought it would look like this: L1 L2 L3 L4 ... Ln R1 R2 R3 R4 ... Rn. It turned out that in my data the right channel was simply missing; it wasn't non-interleaved data at all, just plain mono data. And yes, you should get multiple buffers when the data really is non-interleaved. If it's interleaved, it should be L1 R1 L2 R2 L3 R3 L4 R4 ... (see the sketch below).
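A minimal sketch of the interleaving step, assuming the two channels arrive as separate buffers of equal length (the function and buffer names are mine; int32_t stands in for Core Audio's SInt32):

    #include <stdint.h>

    // Merge two non-interleaved channel buffers into one interleaved
    // buffer laid out as L1 R1 L2 R2 L3 R3 ...
    static void InterleaveStereo(const int32_t *left, const int32_t *right,
                                 int32_t *interleaved, size_t frameCount)
    {
        for (size_t i = 0; i < frameCount; ++i) {
            interleaved[2 * i]     = left[i];
            interleaved[2 * i + 1] = right[i];
        }
    }

Note that `interleaved` must hold 2 * frameCount samples, since each frame contributes one sample per channel.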
Second, the actual transformation: it depends on the range of the samples. In my case (and in any case involving Core Audio, if I'm correct) the fixed-point 8.24 values range between (-1, 1), while 16-bit signed values range between (-32768, 32767). An 8.24 value has its first 8 bits all set to 0 (when it's positive) or all set to 1 (when it's negative). Those first 8 bits should be removed (preserving the sign, of course). You can also drop as many trailing bits as you want; it will reduce the quality of the sound, but it won't ruin it. When converting to a 16-bit signed format, bits 8-22 (15 bits, that is, counting from the most significant bit) contain the data we need for the SInt16, and bit 7 can be used as the sign bit. So to convert 8.24 to SInt16 you need to shift 9 bits to the right (9 because you need to preserve the sign) and cast the result to SInt16:
    11111111 10110110 11101110 10000011 -> 11111111 11111111 (11011011 01110111)
    00000000 01101111 00000000 11000001 -> 00000000 00000000 (00110111 10000000)
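For completeness, here's a minimal sketch of the whole conversion loop, assuming the input is one buffer of 8.24 samples (mono, or already interleaved as above); int32_t/int16_t stand in for Core Audio's SInt32/SInt16, and the function name is mine:

    #include <stdint.h>
    #include <stdlib.h>

    // Convert 8.24 fixed-point samples to 16-bit signed PCM.
    // The arithmetic right shift by 9 keeps the sign bits plus the
    // 15 most significant data bits, exactly as described above.
    // (Right-shifting a negative signed integer is technically
    // implementation-defined in C, but it is a sign-preserving
    // arithmetic shift on Apple's toolchains.)
    static int16_t *Convert824To16(const int32_t *source, size_t sampleCount)
    {
        int16_t *dest = malloc(sampleCount * sizeof(int16_t));
        if (dest == NULL) return NULL;

        for (size_t i = 0; i < sampleCount; ++i) {
            dest[i] = (int16_t)(source[i] >> 9);
        }
        return dest;
    }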
That's it. Nothing more than iterating through the array and shifting the bits right. Hope that's going to save somebody a couple of hours.