The intraperiod sets how many frames there are between I-frames. You can only start decoding a stream from an I-frame (technically from an IDR frame, but all of our I-frames are IDR frames).
Using "raspivid -g -1" runs into a bit of a mess over signed vs unsigned values in the code. The underlying value used by the firmware is unsigned. It is read off the command line with sscanf("%u") as unsigned, but into a signed variable. Adding -v prints it out as signed.
Even if that were fixed so that they are all unsigned, there is a niggle in sscanf("%u", "-1"). The C99 spec says:
"Matches an optionally signed decimal integer, whose format is the same as expected for the subject sequence of the strtoul function with the value 10 for the base argument. The corresponding argument shall be a pointer to unsigned integer."
So that actually returns 4294967295, and that is the number of frames I would expect the encoder to produce between I-frames (ignoring potential truncation elsewhere into 16-bit values). At 30fps that works out to around 4.5 years, so effectively never.
If you really want never, then specify 0, as that is defined as producing one I-frame and then all P-frames. In doing so, be aware that you will never be able to seek within, or error-recover, the stream. Lost data will almost certainly result in an unplayable stream.
If you're gaining significantly on latency, then I'd expect that to be mainly down to the reduced size of the data being sent across the link. The actual encode latency is likely to be a smidge higher than for an I-frame due to the need for motion estimation over the frame, although you'll then gain some of that back as there is less data to be CABAC encoded.
Not much I can say about compatibility with slice decoding on other platforms. Where it may be getting upset is over timestamps, as all slices in a frame should have the same timestamp. I know that when I was last working on Android it didn't like fragmented encoded frames, so that may well still be the case.
Fundamentally using slices is just chopping the frame up into smaller independent encodes, and each encode can be thrown down the pipe as soon as it is complete rather than once the full frame is encoded. This compromises compression efficiency slightly as it can't reference across the slice boundary.
A 1080p frame takes around 40ms to encode (the encoder is pipelined, hence that can be longer than a frame time), so if slice support worked above 720p you would expect to see that time roughly halve.
Software Engineer at Raspberry Pi Trading. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.