I hope you don't mind extending the discussion on tearing a bit more; I've been beating my head against this issue for live video playout to TV on Windows for more than four years.
To ensure there is no tearing in the video, make sure Windows 7 has the "Aero" transparency mode enabled. EVR in vMix DOES use 3D mode, so if you can't use Aero, as a last resort tick the Synchronise option in the vMix settings under Performance.
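For what it's worth, you can verify programmatically that Aero (desktop composition) is actually active before trusting EVR's output; Windows Vista/7 exposes a one-call check in dwmapi. A minimal sketch (Windows-only, not tied to vMix itself):

```cpp
#include <dwmapi.h>
#include <stdio.h>
#pragma comment(lib, "dwmapi.lib")

int main(void)
{
    BOOL enabled = FALSE;
    // DwmIsCompositionEnabled reports whether the desktop compositor
    // (i.e. Aero) is running; standard EVR only presents tear-free
    // when it is.
    if (SUCCEEDED(DwmIsCompositionEnabled(&enabled)))
        printf("Aero composition: %s\n", enabled ? "on" : "off");
    return 0;
}
```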
Indeed, with Aero enabled, the EVR renderer produces video without tearing, though I do see some judder (probably field-order issues) in some content; selecting deinterlace on the input seems to reduce it. EVR without Aero in vMix (with or without the Synch option checked) still produces tearing.
I would suggest buying a cheap dedicated graphics card and a converter for the secondary output to TV if native TV output is not available.
The intent is to create a stable, glitch-free vision mixer from a highly integrated device without external add-ons. This device works flawlessly under Linux with proprietary drivers, handling two NTSC live streams, applying various effects, and outputting to Y/C, component, or composite; it just isn't a general-purpose production switcher with its original software. The box has only two PCI-e slots and cannot accommodate any more internal devices of any kind.
MPC-HC has solved the judder and tearing issues with its EVR Custom Presenter, which can use D3D Exclusive mode and doesn't require Aero. Running Aero on my box entails a substantial performance hit. I once approached some devs in that project about forking a vision mixer project to use the technology already proven for the player, but got no replies, probably because of the limited 'market'. Perhaps the ideas, if not the code itself, could be useful for vMix development? Or, please consider an open source fork of vMix Basic, which may prove useful in the long run for community contributions that would benefit your full version under a dual licensing scheme (as is done with the Asterisk project).
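To make the point concrete: the tear-free presentation that an exclusive-mode presenter relies on boils down to creating the Direct3D 9 device with a non-windowed swap chain and a vsync-locked present interval, so the device owns the display and Present() waits for vblank with no compositor involved. A minimal sketch, not MPC-HC's actual code; the window handle and the 1920x1080@60 mode are assumptions and should be queried from the display in practice:

```cpp
#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

// Sketch: create a D3D9 device in exclusive (non-windowed) mode so the
// swap chain owns the display and Present() is vsync-locked -- no Aero
// composition is needed to avoid tearing.
IDirect3DDevice9* CreateExclusiveDevice(IDirect3D9* d3d, HWND hwnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = FALSE;                               // exclusive fullscreen
    pp.BackBufferWidth = 1920;                         // assumed mode;
    pp.BackBufferHeight = 1080;                        // query the display
    pp.BackBufferFormat = D3DFMT_X8R8G8B8;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hwnd;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE; // wait for vblank
    pp.FullScreen_RefreshRateInHz = 60;

    IDirect3DDevice9* dev = NULL;
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                   D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                   &pp, &dev);
    return SUCCEEDED(hr) ? dev : NULL;
}
```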
Some observations on projects that try to combine a vision mixer with a recorder/network streamer: an Italian broadcast media software company tried with a product called Movie Cockpit and failed. Here is an email that I received some time ago:
"I'm sorry to inform you that MovieCockpit is not being developed any further
than what is currently available thru the webstore.
The effects, transition and output modules will not become available.
An all-in-one software for video recording, switching, live output and
streaming has proved to be almost impossible to build (as reliable as we
wanted it to be) using the current technology.
However we are now working on 2 separate products: one recorder/player and
one switcher/effect with live output.
The recorder/player can be used by itself, but it won't switch.
The switcher/effect with live output can be used by itself, but it won't
record.
With both applications running on 2 separate machines (networked), you
will be able to obtain everything.
The recorder is almost done and it supports the full range of BlackMagic cards,
all video formats (2K excluded), using the following codecs: MJPEG,
Uncompressed, and Cineform HD and RGB. It also supports external LTC
time-code input. Later, MXF files will be generated instead of AVI.
The switcher will work with uncompressed I/O only. The switcher will require
one card for output and as many cards as you need to match the number of
inputs (the current limit is 8 inputs)."
Indeed, trying to stream or record in vMix on my target platform with two NTSC live inputs is not possible at full frame rate. However, I have found that playout of networked MPEG files over GigE to the box in vMix works quite well, at very low CPU load, so the two-machine approach above seems to have merit.