Live Production Software Forums


JoseL  
#21 Posted : Tuesday, October 30, 2018 12:17:49 AM(UTC)
JoseL

Rank: Advanced Member

Groups: Registered
Joined: 4/15/2018(UTC)
Posts: 64
Man
Location: Spain

Thanks: 6 times
Was thanked: 19 time(s) in 13 post(s)
Originally Posted by: Ittaidv Go to Quoted Post
I personally use X299 with ASUS WS motherboards; they use PLX chips on the motherboard, so the number of lanes of the Intel 7900X is doubled. I get better results than these stats in terms of lost frames, and can run the same number of MultiCorders. I understand the idea of going for Threadripper, but I don't think it's the best idea if you really need high performance.


Can you please explain “the number of lanes on the 7900X is doubled”? The 7900X has 44 lanes; can you use x16 on all PCIe slots on your board?

I have the Designare EX, and if I add a second M.2 drive I lose one PCI Express x4 slot; if I add a third M.2, I lose some SATA ports. Everything is shared on the Gigabyte board; is it different on your ASUS?

Vuurmannetje  
#22 Posted : Tuesday, October 30, 2018 12:26:58 AM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
If there were a magical way to double PCIe lanes, everyone would do it.

Would love to know your other specs. Always good to see "Ultimate" builds, but let's get the full picture.

Alternatively, you could be referring to dual-socket boards, in which case you get double the lanes but need two CPUs.




Ittaidv  
#23 Posted : Tuesday, October 30, 2018 10:08:30 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
Originally Posted by: Vuurmannetje Go to Quoted Post
If there were a magical way to double PCIe lanes, everyone would do it.

Would love to know your other specs. Always good to see "Ultimate" builds, but let's get the full picture.

Alternatively, you could be referring to dual-socket boards, in which case you get double the lanes but need two CPUs.






Actually, there is a magical way: it's called a PLX chip. Martin referred to it in his Z170 builds, where they bought a motherboard with PLX chips to double the lanes of the 7700K and allow for more bandwidth. Just Google it; it's not hocus pocus, it actually works fine and allows for more I/O in vMix :)

Martin also mentioned these, in this thread for example:

https://forums.vmix.com/...et-Issues-and-Discussion
Ittaidv  
#24 Posted : Tuesday, October 30, 2018 10:16:09 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
Originally Posted by: JoseL Go to Quoted Post
Originally Posted by: Ittaidv Go to Quoted Post
I personally use X299 with ASUS WS motherboards; they use PLX chips on the motherboard, so the number of lanes of the Intel 7900X is doubled. I get better results than these stats in terms of lost frames, and can run the same number of MultiCorders. I understand the idea of going for Threadripper, but I don't think it's the best idea if you really need high performance.


Can you please explain “the number of lanes on the 7900X is doubled”? The 7900X has 44 lanes; can you use x16 on all PCIe slots on your board?

I have the Designare EX, and if I add a second M.2 drive I lose one PCI Express x4 slot; if I add a third M.2, I lose some SATA ports. Everything is shared on the Gigabyte board; is it different on your ASUS?




Not all of them; you can run at most four slots at x16 on an X299 Sage motherboard, for example.

In reality you will never use that, though, since most cards are at most x8 PCIe Gen 2. So, counting a GPU in one slot, you could easily fill all six other slots with 8-input SDI capture cards, or even quad 4K cards. To my knowledge there is no capture card on the market that uses x16 Gen 3.
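
To put rough numbers on that, here's a quick back-of-envelope sketch. The per-lane throughput figures are rounded approximations, and the script is only an illustration of the lane maths, nothing vMix-specific:

Code:
# Rough lane/bandwidth budget for a setup like the one above: one x16 Gen3 GPU
# plus six x8 Gen2 capture cards behind PLX switches on a 44-lane CPU.
# Per-lane figures are rounded (~0.5 GB/s Gen2, ~1 GB/s Gen3); illustration only.

GEN2_GBPS = 0.5   # approx. usable GB/s per PCIe 2.0 lane
GEN3_GBPS = 1.0   # approx. usable GB/s per PCIe 3.0 lane

cpu_lanes = 44    # i9-7900X
devices = [("GPU x16 Gen3", 16, GEN3_GBPS)]
devices += [("8-input SDI card %d (x8 Gen2)" % (i + 1), 8, GEN2_GBPS) for i in range(6)]

electrical_lanes = sum(lanes for _, lanes, _ in devices)
worst_case_gbps = sum(lanes * gbps for _, lanes, gbps in devices)
cpu_gbps = cpu_lanes * GEN3_GBPS

print("Electrical lanes behind the switches: %d (CPU only has %d)" % (electrical_lanes, cpu_lanes))
print("Worst-case demand: ~%.0f GB/s vs ~%.0f GB/s available at the CPU" % (worst_case_gbps, cpu_gbps))
# 64 electrical lanes oversubscribe the 44 CPU lanes on paper, but because the
# capture cards are only Gen2, even their combined worst case (~40 GB/s) still
# fits under what 44 Gen3 lanes can move (~44 GB/s).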
Ittaidv  
#25 Posted : Tuesday, October 30, 2018 11:10:33 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
To give you an idea: the Blackmagic 8K capture card, which has four 12G-SDI inputs, only uses eight Gen 3 lanes. This means you could easily stash six of them in an X299 board with PLX chips and have enough lanes left for your GPU, SSDs, etc.

Here are the specs of the bm card:
https://www.blackmagicde...klink/techspecs/W-DLK-34

And here you can read about the X299 WS Sage board, which I use in X299 builds nowadays:

https://www.asus.com/us/...s/WS-X299-SAGE/overview/

The manual says you can run one slot at x16 while all six other slots are occupied with x8 cards.

The BM Quad 2, which gives you eight HD-SDI inputs, only uses eight Gen 2 lanes. A Gen 2 lane carries roughly 0.5 GB/s, while a Gen 3 lane carries roughly 1 GB/s, so Gen 3 has about double the bandwidth per lane. In theory, if you used exotic hardware to split your Gen 3 lanes in half, you could stash 12 of these cards into a single system, which would give you 96 inputs.

The Yuan quad 4Kp30 capture card only uses four Gen 2 lanes. In theory, again with even more exotic hardware, you could split your lanes and take almost 200 of these inputs into a single system.
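
If you want to sanity-check those theoretical numbers yourself, here's a rough sketch. The raw SDI line rates are only an upper bound for what a card actually pushes over PCIe (the real payload depends on the pixel format), and the per-lane figures are rounded:

Code:
# Back-of-envelope: raw SDI input rates vs PCIe link bandwidth for the cards above.
HD_SDI_GBIT = 1.485    # Gbit/s per HD-SDI input
SDI_12G_GBIT = 11.88   # Gbit/s per 12G-SDI input

def link_gbps(lanes, gen):
    per_lane = {2: 0.5, 3: 1.0}[gen]   # approx. GB/s per lane
    return lanes * per_lane

quad2_in = 8 * HD_SDI_GBIT / 8.0    # 8 HD-SDI inputs, bits -> bytes
card8k_in = 4 * SDI_12G_GBIT / 8.0  # 4 x 12G-SDI inputs

print("Quad 2: ~%.1f GB/s of SDI vs ~%.0f GB/s on its x8 Gen2 link" % (quad2_in, link_gbps(8, 2)))
print("8K card: ~%.1f GB/s of SDI vs ~%.0f GB/s on its x8 Gen3 link" % (card8k_in, link_gbps(8, 3)))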

I know theory is far from reality; I think going over 30-ish inputs will probably run you into trouble, and I personally never tested any system with more than 25 (50i) inputs/outputs running simultaneously. Also, when you are talking about this much bandwidth going over your PCIe lanes, you should consider the bandwidth of your GPU, which is still limited to x16 lanes, etc.

When we talk about using dual-socket systems, Martin once said that it introduced a lot of extra latency and was not worth the effort. Same for using SLI or dual graphics cards. I would still like to try it some day, though, just to see what we're talking about. Hence I'm also happy that someone tried a Threadripper system and published the results. For now I just don't think it's worth it (yet) compared to an X299 system.

1 user thanked Ittaidv for this useful post.
JoseL on 10/30/2018(UTC)
JoseL  
#26 Posted : Tuesday, October 30, 2018 8:33:05 PM(UTC)
JoseL

Rank: Advanced Member

Groups: Registered
Joined: 4/15/2018(UTC)
Posts: 64
Man
Location: Spain

Thanks: 6 times
Was thanked: 19 time(s) in 13 post(s)
All of this needs some testing. Maybe it works wonderfully, or maybe it gives us all sorts of problems. It can add latency or it can drop frames sometimes; better to test and make sure it does not add any kind of artifacts when you are sharing the same PCIe lanes between different cards. Good to know that one more tool is in the pocket.

regards,

Jose.
Vuurmannetje  
#27 Posted : Wednesday, October 31, 2018 12:21:32 AM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
So it's not doubling lanes but slots. And since most cards use 4 or 8 lanes, you get to use plenty of slots.

That sounds a lot more realistic.
Ittaidv  
#28 Posted : Thursday, November 1, 2018 7:32:56 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
Originally Posted by: Vuurmannetje Go to Quoted Post
So it's not doubling lanes but slots. And since most cards use 4 or 8 lanes, you get to use plenty of slots.

That sounds a lot more realistic.


No, it's doubling lanes by doing some sort of traffic management. How could a 44-lane CPU do 4-way x16 otherwise?

Vuurmannetje  
#29 Posted : Thursday, November 1, 2018 7:59:12 AM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
Yeah, I did some reading up on PLX, and it's not multiplying any lanes. It's just doing traffic management.

Basically it uses the existing lanes to funnel multiple PCIe devices into fewer lanes. You are still limited by the PCIe bandwidth available on your CPU, but PLX allows you to connect more devices, and by balancing properly you can get them all to work. In some specific setups you can get multiple x16 devices to work if their bandwidth usage can be managed properly.

So in our case, one can get a motherboard with a PLX controller to get more PCIe slots sharing the same x16 lanes, especially since we use x4 and x8 capture cards.
There are also standalone PLX controllers that allow you to do the same on any motherboard.

I've seen Linus do something crazy with like 16 GPUs for virtual machines on a dual-socket motherboard.
So no, this is not a trick specific to Intel; it's pretty much about PCIe in general. It does, however, offer an interesting option for Threadripper builds, as the 2990WX has 64 PCIe lanes, and technically you can get a whole load of capture cards on that with a PLX setup.
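
To picture the oversubscription, here's a toy model of a switch with one x16 upstream link. It ignores arbitration, buffering and protocol overhead, and the "typical share" numbers are invented purely for illustration:

Code:
# Toy model of a PLX switch: several downstream devices funnel into one x16
# Gen3 upstream link to the CPU.

UPSTREAM_GBPS = 16 * 1.0   # x16 Gen3 upstream, ~1 GB/s per lane

# (device, downstream link GB/s, fraction of that link it typically uses)
downstream = [
    ("capture card A, x8 Gen2", 8 * 0.5, 0.4),
    ("capture card B, x8 Gen2", 8 * 0.5, 0.4),
    ("capture card C, x8 Gen2", 8 * 0.5, 0.4),
    ("capture card D, x8 Gen2", 8 * 0.5, 0.4),
    ("NVMe SSD, x4 Gen3",       4 * 1.0, 0.3),
]

typical = sum(link * share for _, link, share in downstream)
worst = sum(link for _, link, _ in downstream)

print("Upstream to CPU: ~%.0f GB/s" % UPSTREAM_GBPS)
print("Typical combined demand: ~%.1f GB/s -> fits" % typical)
print("Worst case (every link saturated): ~%.0f GB/s -> oversubscribed" % worst)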

Ittaidv  
#30 Posted : Sunday, November 4, 2018 2:39:45 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
Originally Posted by: Vuurmannetje Go to Quoted Post
Yeah, I did some reading up on PLX, and it's not multiplying any lanes. It's just doing traffic management.

Basically it uses the existing lanes to funnel multiple PCIe devices into fewer lanes. You are still limited by the PCIe bandwidth available on your CPU, but PLX allows you to connect more devices, and by balancing properly you can get them all to work. In some specific setups you can get multiple x16 devices to work if their bandwidth usage can be managed properly.

So in our case, one can get a motherboard with a PLX controller to get more PCIe slots sharing the same x16 lanes, especially since we use x4 and x8 capture cards.
There are also standalone PLX controllers that allow you to do the same on any motherboard.

I've seen Linus do something crazy with like 16 GPUs for virtual machines on a dual-socket motherboard.
So no, this is not a trick specific to Intel; it's pretty much about PCIe in general. It does, however, offer an interesting option for Threadripper builds, as the 2990WX has 64 PCIe lanes, and technically you can get a whole load of capture cards on that with a PLX setup.



Thanks for explaining how it works :) I've been using motherboards with PLX chips for a long time already, first because of SLI, later for vMix.

I'm for sure also someone who likes to push things, but with 64 PCIe 3.0 lanes available, and 128 lanes on some other Threadripper models, I can't even imagine what useful things I would do with all of those.

Like I said earlier: the number of lanes is not everything. In the end, all data still needs to pass through many other devices in your computer, such as the GPU. It's cool not to have the bottleneck in the number of lanes, but I would not be too sure the bottleneck won't end up around the GPU or somewhere else. I also once read that the speed of the DMI link has some influence on vMix as well; I just can't find it back right now.

Something else: Threadrippers are great, but because of the architecture of the processor (it's actually multiple Ryzen dies 'glued together' with the Infinity Fabric interconnect), latency will never be as good as with something like a good old 7700K. PLX chips also add some latency, but I never experienced serious trouble because of it (as far as I can judge).

I think it's cool to experiment with Threadrippers, though, and even better to publish the results. If I had all the cash in the world, I would for sure just buy one to play with :) What would be kick-ass is to put a Threadripper next to an i9-7900X and compare multiple things, such as latency, dropped frames, etc. I'm located in Belgium and have access to many 7900X systems with PLX; hit me up if you're up for it!
Vuurmannetje  
#31 Posted : Sunday, November 4, 2018 7:55:15 AM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
Originally Posted by: Ittaidv Go to Quoted Post
Originally Posted by: Vuurmannetje Go to Quoted Post
Yeah, I did some reading up on PLX, and it's not multiplying any lanes. It's just doing traffic management.

Basically it uses the existing lanes to funnel multiple PCIe devices into fewer lanes. You are still limited by the PCIe bandwidth available on your CPU, but PLX allows you to connect more devices, and by balancing properly you can get them all to work. In some specific setups you can get multiple x16 devices to work if their bandwidth usage can be managed properly.

So in our case, one can get a motherboard with a PLX controller to get more PCIe slots sharing the same x16 lanes, especially since we use x4 and x8 capture cards.
There are also standalone PLX controllers that allow you to do the same on any motherboard.

I've seen Linus do something crazy with like 16 GPUs for virtual machines on a dual-socket motherboard.
So no, this is not a trick specific to Intel; it's pretty much about PCIe in general. It does, however, offer an interesting option for Threadripper builds, as the 2990WX has 64 PCIe lanes, and technically you can get a whole load of capture cards on that with a PLX setup.



Thanks for explaining how it works :) I've been using motherboards with PLX chips for a long time already, first because of SLI, later for vMix.

I'm for sure also someone who likes to push things, but with 64 PCIe 3.0 lanes available, and 128 lanes on some other Threadripper models, I can't even imagine what useful things I would do with all of those.

Like I said earlier: the number of lanes is not everything. In the end, all data still needs to pass through many other devices in your computer, such as the GPU. It's cool not to have the bottleneck in the number of lanes, but I would not be too sure the bottleneck won't end up around the GPU or somewhere else. I also once read that the speed of the DMI link has some influence on vMix as well; I just can't find it back right now.

Something else: Threadrippers are great, but because of the architecture of the processor (it's actually multiple Ryzen dies 'glued together' with the Infinity Fabric interconnect), latency will never be as good as with something like a good old 7700K. PLX chips also add some latency, but I never experienced serious trouble because of it (as far as I can judge).

I think it's cool to experiment with Threadrippers, though, and even better to publish the results. If I had all the cash in the world, I would for sure just buy one to play with :) What would be kick-ass is to put a Threadripper next to an i9-7900X and compare multiple things, such as latency, dropped frames, etc. I'm located in Belgium and have access to many 7900X systems with PLX; hit me up if you're up for it!


Haha, if I have something down there I'll sure hook up.
Yesterday we did a test of our online show and ran about 16 vMix Calls and 6 other NDI sources at a stable 20 ms render time on my 2990WX, with plenty of CPU to spare (avg 8% use). So for those workflows it feels great. I use a lot of composed multiview scenes in this show.

One thing of note for TR builds is that memory speed directly affects CPU performance. This is because the speed of the Infinity Fabric is linked to the memory clock, so a TR running DDR4-3600 performs miles better than at non-OC speeds. So when building, make sure to get the highest memory speed that is certified for your motherboard.
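
For illustration, a quick back-of-envelope using the Zen/Zen+ rule of thumb that the fabric clock equals the memory clock (half the DDR4 transfer rate). Treat this as my understanding, not an official spec:

Code:
def fabric_mhz(ddr4_rate):
    return ddr4_rate / 2.0   # MEMCLK = DDR4-xxxx transfer rate / 2

for rate in (2133, 2933, 3600):
    print("DDR4-%d -> fabric ~%.0f MHz" % (rate, fabric_mhz(rate)))

# DDR4-3600 gives ~1800 MHz fabric vs ~1066 MHz at DDR4-2133, i.e. roughly
# 69% more die-to-die bandwidth on a multi-die CPU like Threadripper.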








Ittaidv  
#32 Posted : Monday, November 5, 2018 8:33:22 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
20 ms is high! I run 16 SDI inputs + 2 SDI outputs + 2 NDI outputs, while playing back like 4-5 movies, on an i9 rig and get max 5 ms... The biggest issue with X299 is that render times still get a little jumpy once in a while, up to 10-15 ms or so when using the external outputs, but it's ultra stable and no frames are dropped. Did you ever open Statistics and check for dropped frames?
Vuurmannetje  
#33 Posted : Monday, November 5, 2018 11:24:47 PM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
Originally Posted by: Ittaidv Go to Quoted Post
20 ms is high! I run 16 SDI inputs + 2 SDI outputs + 2 NDI outputs, while playing back like 4-5 movies, on an i9 rig and get max 5 ms... The biggest issue with X299 is that render times still get a little jumpy once in a while, up to 10-15 ms or so when using the external outputs, but it's ultra stable and no frames are dropped. Did you ever open Statistics and check for dropped frames?


Not getting any dropped frames in this setup. In my default preset (which is around 20 inputs) my render time stays under 10 ms, though. So it's just the excessive use of multiviews and many, many input sources that pumps it up. Basically, there are currently close to 100 inputs running, of which around 22 are live sources. I'm also using all 4 NDI outputs, each with its own scene in it.

This is a non-issue for this show as long as there are no frame drops or audio glitches.
Vuurmannetje  
#34 Posted : Tuesday, November 6, 2018 8:49:15 AM(UTC)
Vuurmannetje

Rank: Advanced Member

Groups: Registered
Joined: 5/14/2018(UTC)
Posts: 112
Location: Netherlands

Thanks: 3 times
Was thanked: 28 time(s) in 18 post(s)
While experimenting I just found out I can halve my render times by disabling 'use display settings' on the external output. So that puts me at 5 ms on the big file without actors connected.
niemi  
#35 Posted : Tuesday, November 6, 2018 9:51:54 PM(UTC)
niemi

Rank: Advanced Member

Groups: Registered
Joined: 2/16/2017(UTC)
Posts: 178
Location: Denmark

Thanks: 27 times
Was thanked: 18 time(s) in 15 post(s)
On a note about X299: both the WS X299 SAGE/10G and the not-yet-released GIGABYTE X299-WU8 have dual Broadcom PLX PEX8747 chips, allowing for:

x16/x16/x16/x16 or x16/x8/x8/x8/x8/x8/x8
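
Rough arithmetic on how that adds up, assuming the PEX8747 is wired in its usual x16-upstream / 32-lanes-downstream configuration (actual board wiring varies, so treat this as an approximation):

Code:
# Dual PEX8747 layout: each switch takes x16 from the CPU and exposes 32 lanes
# of slots, so two of them cover both slot modes listed above.
switches = 2
upstream_per_switch = 16
downstream_per_switch = 32

cpu_lanes_used = switches * upstream_per_switch        # 32 of the CPU's 44 lanes
downstream_lanes = switches * downstream_per_switch    # 64 electrical lanes in slots

print("CPU lanes feeding the switches: %d" % cpu_lanes_used)
print("Electrical lanes available in slots: %d" % downstream_lanes)
print("e.g. 4 x16 slots (4*16 = %d) or x16 + 6*x8 (= %d)" % (4 * 16, 16 + 6 * 8))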
niemi  
#36 Posted : Wednesday, November 7, 2018 1:20:03 AM(UTC)
niemi

Rank: Advanced Member

Groups: Registered
Joined: 2/16/2017(UTC)
Posts: 178
Location: Denmark

Thanks: 27 times
Was thanked: 18 time(s) in 15 post(s)
Originally Posted by: millst Go to Quoted Post
Some more digging, and one of the DeckLink cards is sharing PCIe lanes with an SSD, so this is dropping frames and increasing latency on one of the cards.

How did you arrive at that conclusion? And what did you do to fix it?

Ittaidv  
#37 Posted : Saturday, November 17, 2018 6:52:52 AM(UTC)
Ittaidv

Rank: Advanced Member

Groups: Registered
Joined: 12/19/2013(UTC)
Posts: 600
Man
Belgium
Location: Belgium

Thanks: 75 times
Was thanked: 91 time(s) in 75 post(s)
Originally Posted by: Vuurmannetje Go to Quoted Post
While experimenting I just found out I can halve my render times by disabling 'use display settings' on the external output. So that puts me at 5 ms on the big file without actors connected.


That's insane actually... I run at 6-10 ms on a 7900X machine, with 16 SDI inputs, 2 SDI outputs, 2 NDI outputs and 7 playlists with videos and pictures.

Kpronin  
#38 Posted : Monday, November 11, 2019 5:45:13 PM(UTC)
Kpronin

Rank: Newbie

Groups: Registered
Joined: 12/17/2014(UTC)
Posts: 4

Thanks: 2 times
Originally Posted by: Ittaidv Go to Quoted Post
Originally Posted by: Vuurmannetje Go to Quoted Post
While experimenting I just found out I can halve my render times by disabling 'use display settings' on the external output. So that puts me at 5 ms on the big file without actors connected.


That's insane actually... I run at 6-10 ms on a 7900X machine, with 16 SDI inputs, 2 SDI outputs, 2 NDI outputs and 7 playlists with videos and pictures.



What's your PC spec to do that?

Thanks
Xavi137  
#39 Posted : Thursday, March 24, 2022 3:23:13 AM(UTC)
Xavi137

Rank: Member

Groups: Registered
Joined: 5/10/2021(UTC)
Posts: 28

Thanks: 10 times
Greetings

I know this is an old thread.

But I have a basic question.

How are NDI outputs generated from a DeckLink card?

Thanks
RichShumaker  
#40 Posted : Saturday, July 2, 2022 3:55:49 AM(UTC)
RichShumaker

Rank: Advanced Member

Groups: Registered
Joined: 4/4/2016(UTC)
Posts: 233
United States
Location: Not Los Angeles CA

Thanks: 86 times
Was thanked: 28 time(s) in 23 post(s)
Checking in on this system to see how it has held up. Oddly, the 1950X is still one of the HEDT (high-end desktop) options, as is the X299 motherboard. Weird that the setups are still very similar almost 4 years later.