You can sometimes find block diagrams for chipsets, but how those chipsets end up being implemented on a given motherboard varies. Sometimes the diagrams are in the manuals; sometimes you have to go looking. You can generally find something, though. For the Threadripper Pro series, I haven't done a build with one yet, but I've been eyeballing the ASUS Pro WS WRX90E-SAGE for a future build. The Threadripper Pros are even more badass because they run with 128 PCIe lanes, giving you *all the IO* to do *whatever you want*. You can find the manual for the WRX90E-SAGE here. Page A-1 of the manual contains a block diagram for the motherboard.
This motherboard features seven x16 physical slots: four wired x16 to the CPU, and three more (all x16 physical, but one only x8 electrical) that are possibly shared with some of the USB3 ports (not sure what the "with REDRIVER" annotation means; still need to look into that). It also has a total of four (!!) NVMe slots wired directly to the CPU, without going through the chipset bottleneck.
But generally, when searching for a motherboard, start with the processor you want to use, find a board with the physical layout you're looking for (how many PCIe slots, at what sizes, etc.), and then look for a block diagram for that motherboard and see how everything is wired up.
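Once a box is actually built, you can also sanity-check part of that wiring from software. Here's a minimal Python sketch, assuming a Linux system, that reads the negotiated link width and speed for each PCIe device out of sysfs; a card sitting in an "x16 physical" slot that only trained at x4 shows up immediately. It won't tell you whether a slot hangs off the CPU or the chipset, though; that's still block-diagram territory.

```python
#!/usr/bin/env python3
"""Minimal sketch (Linux only): print the negotiated PCIe link width/speed
for every device that exposes it in sysfs. Handy for confirming that a card
in an "x16 physical" slot actually trained at the width you expected."""
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except OSError:
        continue  # not every PCI function exposes link attributes
    print(f"{os.path.basename(dev)}: x{width} @ {speed}")
```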
Generally speaking, a GPU *needs* to be wired to the CPU. Decklink cards *should* be wired to the CPU if you don't want to run into bandwidth issues (i.e., if you want more than, say, two SDI links in total). If you are doing large amounts of NDI, you *may* need to wire a 10G NIC directly to the CPU, depending on whether you are doing full-frame/high-bandwidth NDI or NDI|HX. If you are trying to do multicording, you probably want to record to NVMe drives that have high write speeds and are wired directly to the CPU, or to SSDs connected to an HBA (host bus adapter, sometimes called a drive controller) that is wired to the CPU. If you look at... basically every motherboard block diagram, the onboard SATA ports are wired to the chipset, so your bottleneck will be the link from the CPU to the chipset.
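To see why that CPU-to-chipset link matters, here's a back-of-envelope Python sketch. The numbers are rough assumptions of mine (link overheads ignored, the workload is hypothetical), not datasheet values, but they show how quickly a pile of chipset-attached devices eats a DMI-class uplink:

```python
#!/usr/bin/env python3
"""Back-of-envelope sketch with rough, assumed numbers (not vendor specs):
add up the traffic hung off the chipset and compare it to the CPU<->chipset
uplink (DMI 3.0 is roughly equivalent to a PCIe 3.0 x4 link)."""

BITS_PER_BYTE = 8

# Approximate usable bandwidth in GB/s -- assumptions, not datasheet values.
DMI3_UPLINK = 3.9                     # ~PCIe 3.0 x4 between CPU and chipset
SDI_3G_LINK = 3 / BITS_PER_BYTE       # one 3G-SDI input, ~0.375 GB/s
NIC_10G     = 10 / BITS_PER_BYTE      # 10GbE running flat out, ~1.25 GB/s
SATA_SSD    = 0.55                    # one SATA SSD writing at full tilt

# Hypothetical "everything hangs off the chipset" build.
workload = {
    "4x 3G-SDI on a chipset-attached Decklink": 4 * SDI_3G_LINK,
    "10G NIC in a chipset-attached slot":       NIC_10G,
    "3x SATA SSDs recording":                   3 * SATA_SSD,
}

total = sum(workload.values())
for name, gbs in workload.items():
    print(f"{name:45s} {gbs:5.2f} GB/s")
print(f"{'Total through the chipset':45s} {total:5.2f} GB/s")
print(f"{'DMI 3.0 uplink (approx.)':45s} {DMI3_UPLINK:5.2f} GB/s")
print("Over the uplink -- the chipset link is the bottleneck."
      if total > DMI3_UPLINK
      else f"Headroom: {DMI3_UPLINK - total:.2f} GB/s")
```

A GPU or a pair of NVMe recording drives sitting on CPU-attached lanes never touches that uplink at all, which is the whole argument for reading the block diagram before you buy.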
The biggest thing to keep in mind is that people think about obvious bottlenecks: CPU utilization. RAM utilization. NIC utilization. IOPS on disks. But there are other, hidden bottlenecks present in the system, based on where your IO is happening, and how it's plumbed into the CPU. Don't stick dozens of things on slots/ports connected to the chipset and then wonder where your performance went.
If I have one bone to pick with Intel, it's how they ended up throttling their consumer CPUs around the 6th or 7th generation, when they introduced the i9 line. Prior to then, you could get -XE-class processors (although they weren't yet branded as such) in the i7 line. First they moved them to the i9s, then they brought regular consumer CPUs into the i9 lineup that only had (I think) 20 PCIe lanes (plus the DMI link to the chipset), and the (formerly i7, then i9) high-lane-count parts became the i9-NNNXEs.
Honestly, unless the use case is very small, I wouldn't build a switcher on anything less than an i9-NNNXE, a Xeon (or a DP Xeon, depending on how much IO), or a Threadripper (or an EPYC, I suppose, AMD's server CPU). My personal preference would be a Threadripper/Pro, though I must acknowledge a tendency to overbuild, not just for today but for tomorrow. I can understand how compelling a Xeon-DP could be, given the availability of used systems and so on. A single Xeon or i9-NNNXE is, again, more... limited (40/48 lanes depending on the generation), but if you aren't doing a ton, I could see them working fairly nicely.
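To put some rough numbers on the lane math, here's a quick Python sketch of a hypothetical switcher build's lane budget against those CPU classes. The slot widths and per-CPU lane counts are ballpark assumptions that vary by card and by generation, so treat it as illustrative only:

```python
#!/usr/bin/env python3
"""Rough lane budget for a hypothetical switcher build. Slot widths and CPU
lane counts are ballpark assumptions that vary by card and generation --
check the real block diagram before buying anything."""

# Hypothetical CPU-attached cards and the lanes they'd want.
build = {
    "GPU (x16)":                          16,
    "Decklink capture card (x8)":          8,
    "10G NIC (x8)":                        8,
    "2x NVMe recording drives (x4 each)":  8,
}

# Approximate CPU-attached lane counts by class (generation-dependent).
cpus = {
    "Mainstream desktop i9 (~20 lanes)":    20,
    "HEDT i9-NNNXE (~48 lanes)":            48,
    "Threadripper (~64 lanes)":             64,
    "Threadripper Pro / EPYC (~128 lanes)": 128,
}

need = sum(build.values())
print(f"CPU-attached lanes wanted: {need}")
for name, lanes in cpus.items():
    verdict = "fits" if lanes >= need else "short -- something lands behind the chipset"
    print(f"  {name:40s} {lanes:3d} lanes -> {verdict}")
```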