Thank you, that does clear up some of my confusion. I will clarify the rest of what is confusing me.
It's not 100% clear what you want to do here...
My most basic requirement would be setting up LMCE to use my cable box (or 2 of them) as an input (or inputs), and be able to output to my TV without any loss (or noticeable loss).
To try and summarise in a different way - practically any chipset will generate a 1080i/p screen resolution, and no CPU 'power' is required for that. The higher UIs (UI2 and UI2 Alpha Blended) will require more and more power the higher the resolution you choose, but it is not the actual screen resolution, it is the animation that requires the grunt. For those you will need decent (and compatible) hardware acceleration in your GPU, so this has no impact on your CPU requirements either. Thus very low power CPUs like the Intel Atom can easily handle UI2 on high res screens, as all the work is done by the GPU. BUT - none of this has anything whatsoever to do with VDPAU.
That clears up much of my confusion right there. I would like to be able to use UI2 + alpha blending at 1080 resolution, but I'm fine with just the overlay and no transparency if it's going to make a major difference in price (which it sounds like it is).
VDPAU is a new API that allows high-end nVidia GPUs to do hardware-accelerated decoding of video streams. It has nothing to do with the screen resolution or the 3D animation of UI2. It will be needed for decoding high bit rate compressed video files/discs and/or the more advanced video codecs, like H264 (often used for HD video files and BD/HDDVD). Currently, we rely on software decoding in the CPU until this is integrated. So if you want to play BD or high resolution/high bit rate video files, you will need a high-end CPU. However, nothing in your post indicates that you need to decode such sources.
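To make the software-vs-hardware decoding point concrete: outside of LinuxMCE's own player you can already try VDPAU with mplayer, which ships VDPAU-specific decoders. Here is a hedged sketch, assuming a hypothetical helper that maps a file extension to one of mplayer's real VDPAU decoder names (the extension-to-codec mapping is a simplification; in reality the codec depends on the stream inside the container, not the extension):

```shell
# Hypothetical helper: guess which mplayer VDPAU decoder to request
# from the file extension. The -vc names (ffh264vdpau, ffmpeg12vdpau)
# are real mplayer decoders; the mapping itself is an assumption.
pick_vdpau_codec() {
  case "$1" in
    *.mkv|*.mp4) echo "ffh264vdpau" ;;   # typically H.264 content
    *.mpg|*.ts)  echo "ffmpeg12vdpau" ;; # typically MPEG-2 broadcast
    *)           echo "" ;;              # unknown: software decode
  esac
}

codec=$(pick_vdpau_codec movie.mkv)
# Build the command line; with no matching codec, mplayer falls back
# to CPU (software) decoding, which is the current LinuxMCE situation.
echo "mplayer -vo vdpau ${codec:+-vc $codec} movie.mkv"
# prints: mplayer -vo vdpau -vc ffh264vdpau movie.mkv
```

If the command runs with `-vc ffh264vdpau` and the GPU supports it, the CPU load drops dramatically compared to software decode of the same file.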
A true newbie question here. My understanding is that when I'm watching live TV, LMCE will be recording it to give me pause/rewind functionality. If I am watching/recording an HD TV show, I assume that it would be recorded with some sort of compression (which would surely eat up some CPU). Regardless, wouldn't playing this video back require decoding?
You talk about HDMI/component from your cable provider, but not how you intend to supply this to the TV. HDMI pretty much has to be connected directly to your TV (HDMI capture devices are few and far between, expensive, and of questionable quality or compatibility with LinuxMCE), so LinuxMCE's CPU/GPU is not involved in any way, except to control your cable box and TV... select inputs, volume, etc.
I should mention that I am thinking of setting up a hybrid box for my initial test run. For the connection from my hybrid to my TV, I'm fine with doing HDMI or component. My TV can't do 1080p anyway. For the connection from my cable box to my hybrid, I wanted to use HDMI, but it sounds like that may not be a good idea (based on the next part).
If you capture your component, then again VDPAU has no part to play - you are capturing the video uncompressed, and thus there is nothing to decompress. Capturing analogue video (component) you will always lose quality, particularly on an HD signal, but it is doable, and at least you can then introduce the stream directly into LinuxMCE and so record it or redirect it to other MDs. You should look for commentary on the quality of the various capture boards, and also whether they have hardware compression built in to reduce the load on your CPU when storing the file during a recording session.
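Hardware compression matters here because of the sheer data rate of an uncompressed capture. A back-of-envelope sketch (the numbers are illustrative: 4:2:2 sampling at 2 bytes/pixel and ~30 frames/s for a 1080i signal):

```shell
# Rough data rate of an uncompressed 1920x1080 capture at 4:2:2.
# All figures are illustrative assumptions, not a measured capture.
width=1920; height=1080
bytes_per_px=2   # 4:2:2 chroma subsampling
fps=30

bytes_per_sec=$((width * height * bytes_per_px * fps))
echo "$((bytes_per_sec / 1024 / 1024)) MB/s"
# prints: 118 MB/s
```

At well over 100 MB/s, software-compressing that stream in real time is a serious CPU load, which is why a capture board with an onboard hardware encoder (delivering an already-compressed stream) is worth looking for.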
How much quality loss are we talking about when using component? On one of my TVs I have tried both component and HDMI from my cable box to the TV, and honestly do not see a difference. Is this unnoticeable loss what you are talking about, or will it get worse once it passes through the hybrid box?