I know that UI2+alpha blending has that horrible tearing problem, and I've accepted that. However, I'd like to understand it a little more deeply than "it's just the NVIDIA driver". What is causing this problem, and why have I heard people say that it will probably not be fixed (on NVIDIA's side) anytime soon? Thanks.
I think it is hasty simply to say "It's just the nVidia driver."
1) Firstly, it is clearly not a performance issue, as the exact same issue occurs on all chipsets, in the same way. This wouldn't be the case if it were simply a matter of performance.
2) When I run 3D games on the same system, from KDE, which are massively more taxing in terms of graphics, I can easily get perfectly smooth animation without any tearing at all.
3) The tear line (for me at least) is in exactly the same place on the screen all the time, about 10% from the top, or so.
4) The tear line appears to have a Z shape to it, actually backtracking, then continuing on its way across the screen a small distance further down - others have reported this as well.
I cannot explain point 4 at all, it makes no sense whatsoever unless it is an artifact of the way my LCD scans out the image (this makes no sense either) so I'm just going to leave that point hanging and unexplained!
Point 3, to me, seems to suggest that the Vsync is working perfectly, and that the Vblank interval is more than sufficient for OpenGL to redraw the graphics before flipping the screen. The only explanation I can offer is that the graphics task is well within the card's capabilities, and the Vsync works, however it seems to be syncing to the wrong time, i.e. the flip of the frame buffers is happening a set interval after the screen refresh starts.
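To put rough numbers on that, here is a back-of-the-envelope calculation of how short the Vblank window actually is, assuming the standard CEA-861 timing for 1080p at 60Hz (1125 total scan lines, of which 1080 are visible):

    frame period at 60Hz  =  1/60 s              =  ~16.7 ms
    vblank lines          =  1125 - 1080         =  45 lines
    vblank duration       =  16.7 ms x 45/1125   =  ~0.67 ms

So if the driver only starts work at the beginning of the retrace, rather than merely flipping an already-prepared buffer, it has well under a millisecond before scan-out of the next frame begins - which would put the tear line near the top of the screen, consistent with point 3.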
My take on all this is that it cannot be purely the driver, due to point 2. So perhaps it is a combination of how the Orbiter code interacts with the driver in some way. One other point floating around in my mind is that I seem to recall that LMCE was switched from Ubuntu to Kubuntu, and thus GNOME to KDE, expressly because the KDE interface allowed graphics acceleration within the second (and further) "virtual desktops" that LMCE runs within, and there was some kind of limitation on this within GNOME. I am wondering if perhaps there is some issue in screen flipping when the Orbiter is animating on screens that are not the main KDE desktop screen...
I know this doesn't help fix the issue, I just thought I would document my thoughts on it, particularly the ones that seem to point away from a simple driver issue... although if my last pondering has something to do with it, then perhaps it is a driver/KDE issue?
Basically, with nvidia's current code, combining opengl + composite + vsync seems to be a bit too much for the driver. It has nothing to do with the card. NVIDIA has been notified of the issue, but since nobody else uses the system in quite the same way that we do, I am not sure what they'll do about it.
-Thom
Quote from: tschak909 on February 17, 2009, 12:25:24 AM
Basically, with nvidia's current code, combining opengl + composite + vsync seems to be a bit too much for the driver. It has nothing to do with the card. NVIDIA has been notified of the issue, but since nobody else uses the system in quite the same way that we do, I am not sure what they'll do about it.
-Thom
So basically, what you are saying, Thom, is that it is the compositing that pushes it over the edge? Or is it the driver-side implementation of OpenGL? I guess the games I have tried don't have issues because they bypass OpenGL, using SDL instead? Is there a bug report number for this with nVidia? Thanks!
Does anyone use UI2 with alpha? Do you just live with the tearing, or have most people just resorted to regular UI2? As nice as the +alpha is, I just can't stand the tearing. I have tried the wiki and it didn't help much, maybe because I use PCI video cards.
I have seen some people say they have it working without tearing, but I cannot see how this could possibly be, given that everybody else's experience is identical: tearing!
Quote from: krys on February 17, 2009, 05:59:43 AM
Does anyone use UI2 with alpha? Do you just live with the tearing, or have most people just resorted to regular UI2? As nice as the +alpha is, I just can't stand the tearing. I have tried the wiki and it didn't help much, maybe because I use PCI video cards.
We never use UI2 + Alphablending, primarily because of the tearing, but also because we have had consistent feedback from customers that they find it visually confusing in everyday use.
All the best
Andrew
:)
I was using UI2+alpha for my living room MD. I did this because when I used UI2-masked, the on-screen menu response was sluggish at best. I am not sure why this was happening to me, as it is an NVIDIA 7200GS card, but it was also happening on my man cave MD, which had an NVIDIA 6200 card in it. I just dealt with the "Z" at the top of the screen.
However, after seeing the post about getting better picture quality from PVR-150 tuner cards (http://wiki.linuxmce.org/index.php/Pvr-150_picture_quality_guide), then disabling de-interlacing and enabling OpenGL per the wiki entry, my UI2-masked is smooth as silk.
I intend to test this now with UI2-alpha, to see if it helps resolve the issue. If it does, then I just need to find a way to have the MythTV settings stick across a router reload, and then everything will be perfect.
Regards,
Seth
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Quote from: Afkpuz on February 17, 2009, 06:48:53 PM
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Hmmm... don't see that here, and we never use UI2 + Alpha.
Andrew
Quote from: Afkpuz on February 17, 2009, 06:48:53 PM
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Well, that is almost exactly what I was experiencing. Using the steps from the wiki, specifically:
Disable De-Interlacing
Change method to libmpeg2
Enable OpenGL
Fixed that issue immediately. My UI2-masked is ultra smooth and responsive now.
I just have to figure out how to get the changes to stick across a router reload. But go ahead and try it, just the first parts. I have not modified the recorded TV parts of it, just the playback. Makes it super nice.
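For anyone who wants to experiment with making them stick, here is a hedged sketch of one possible approach. It assumes (unconfirmed) that MythTV stores these options in its mythconverg MySQL database, in the settings table keyed per hostname; the key names and the hostname 'moon21' are examples only, so run the SELECT first and adjust to what your box actually uses:

    # Inspect the current values first; the key names below are assumptions
    # and may differ between MythTV versions.
    mysql -u root mythconverg -e \
      "SELECT value, data, hostname FROM settings
       WHERE value IN ('Deinterlace', 'PreferredMPEG2Decoder', 'UseOpenGLVSync');"

    # Re-apply the preferred decoder (hostname 'moon21' is a hypothetical
    # example MD name; substitute your own).
    mysql -u root mythconverg -e \
      "UPDATE settings SET data = 'libmpeg2'
       WHERE value = 'PreferredMPEG2Decoder' AND hostname = 'moon21';"

If a router reload is what resets the settings, re-running UPDATEs like these from a boot-time script ought to make the change survive.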
Just give it a try ;)
Regards,
Seth
Quote from: Afkpuz on February 17, 2009, 06:48:53 PM
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Mine does this also, especially in MythTV; sometimes there is a 5-10 second delay when calling up the menu, and black boxes intermittently, I believe after you make your selection.
Quote from: krys on February 17, 2009, 07:50:57 PM
Quote from: Afkpuz on February 17, 2009, 06:48:53 PM
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Mine does this also, especially in MythTV; sometimes there is a 5-10 second delay when calling up the menu, and black boxes intermittently, I believe after you make your selection.
We never use Myth... maybe it's some kind of Myth-specific issue... Do you see these artifacts outside Myth? i.e. just with the pss & UI2 menu bar on screen?
All the best
Andrew
I only see it with MythTV. Playing back videos, or MythTV recordings from the video menu, is usually liquid-smooth and flawless.
However, by disabling de-interlacing, changing the preferred method from "standard" to "libmpeg2", and enabling OpenGL, the issue goes away instantly, at least until you reload the router.
Regards,
Seth
We don't use Myth (me, Andrew), so it's looking like it may be a Myth issue....
I'm running a Fiire Server with Fiire Invisible MDs. I ran one MD on UI2 +alpha for about 2 weeks but changed it to overlay for aesthetic reasons (just didn't care for the UI). My other two MDs have always run on UI2 +overlay. In all cases I've never had tearing, black boxes, sluggish response or any of the issues noted here. I am not sure of the exact chipset/specs of the Invisible, but maybe there is something there that can give a hint towards a solution; they seem to have figured it out, or I'm just extremely lucky.
I know that many people have less than stellar things to say about Fiire, but for me they've been perfect. Timely delivery, good support when needed, accurate order, etc. Again, maybe I'm just extremely lucky.
It uses the 7050 nVidia chip, which is exactly the same as my core's on-board chipset... and I always had the tearing problem with it no matter what options I chose... I now use a 7300GT card for other reasons, but this too had the tearing problem in exactly the same way, even though general rendering on this chipset should be like 8 times faster.... So either you were VERY lucky somehow, or I would like to know why Fiire have not contributed a "fix" back for this!
Is there any possibility at all that the tearing issue is a PCIe problem? Does everybody who is having the problem use a PCIe card? I mention this because I have the dreaded tearing as well, but only with my PCIe nVidia card. My old hybrid had an AGP nVidia, and it worked perfectly.
Quote from: purps on February 20, 2009, 09:54:00 AM
Is there any possibility at all that the tearing issue is a PCIe problem? Does everybody who is having the problem use a PCIe card? I mention this because I have the dreaded tearing as well, but only with my PCIe nVidia card. My old hybrid had an AGP nVidia, and it worked perfectly.
Don't think so... We see it on all combinations of onboard & PCIe GPUs.... Colin's motherboard uses an onboard 7050 GPU... that's not PCIe.
Andrew
The other thing is, blitting and frame buffer switching happen entirely on the GPU side of the PCI interface; the only thing that crosses the PCI boundary during this process is the command to trigger the blit or swap, not the actual data itself. And when this is sync'd to the vblank retrace period, it is triggered within the GPU anyway, so nothing is crossing that interface. (Incidentally, Andrew, of course you are right that the 7050 is onboard, but it is still interfacing using the PCI bus. EDIT: with the onboard 7050, the frame buffer being in system RAM means that this data does cross the PCI interface, so it potentially could make it worse!)
But thinking this through in light of Thom's comments makes me think that perhaps what is happening is that the entire process is being sync'd to start at the beginning of the retrace period (when the notional "beam" is at the bottom-right of the screen - actually at the bottom right of the bottom vertical porch), but the process involves compositing before it can flip the screen buffers. And perhaps this composite takes so long that the retrace has completed and is already part way into tracing out the next frame... hence a tear occurs.
Logically, you would expect the composite to occur after the video frame texture is drawn, but before the request for a buffer flip, and thus before the vblank sync, which would mean that only a flip is needed between the back vertical porch and the front vertical porch, and that takes no time whatsoever. This would also explain to some extent why the tear line is always in the same place, no matter how complex the graphics/video are at the time... hmm... just a thought. Perhaps the nVidia coding within the driver has the composite and vsync in the wrong order?
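One hedged experiment that follows from this (not a fix, just a way to eliminate a variable): the NVIDIA driver honours a couple of environment variables that control whether, and to which display device, GL buffer swaps are synced. Forcing them explicitly for a test application shows whether the sync target itself is part of the problem; the device name below is only an example, check yours in nvidia-settings:

    # Force GL swaps to sync to vblank, and pin the sync to one display
    # (relevant on TwinView setups, where the driver may pick the wrong head).
    export __GL_SYNC_TO_VBLANK=1
    export __GL_SYNC_DISPLAY_DEVICE=DFP-0   # example device name; yours may differ
    glxgears                                # watch whether the tear line persists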
Quote from: colinjones on February 20, 2009, 11:55:39 AM
Snip..... (Incidentally, Andrew, of course you are right that the 7050 is onboard, but it is still interfacing using the PCI bus. EDIT: with the onboard 7050, the frame buffer being in system RAM means that this data does cross the PCI interface, so it potentially could make it worse!)
...Snip....
Well, the above is certainly true... but of course the reality is that tearing in UI2 + Alpha in our tests is equally bad using either on-board GPUs or PCIe-based ones. So this may be a factor... but its impact is the same for both types of GPU.
Quote
Logically, you would expect the composite to occur after the video frame texture is drawn, but before the request for a buffer flip, and thus before the vblank sync, which would mean that only a flip is needed between the back vertical porch and the front vertical porch, and that takes no time whatsoever. This would also explain to some extent why the tear line is always in the same place, no matter how complex the graphics/video are at the time... hmm... just a thought. Perhaps the nVidia coding within the driver has the composite and vsync in the wrong order?
Hmmm... well the above could very well be the case. Interesting hypothesis indeed!!
All the best
Andrew
Quote
Logically, you would expect the composite to occur after the video frame texture is drawn, but before the request for a buffer flip, and thus before the vblank sync, which would mean that only a flip is needed between the back vertical porch and the front vertical porch, and that takes no time whatsoever. This would also explain to some extent why the tear line is always in the same place, no matter how complex the graphics/video are at the time... hmm... just a thought. Perhaps the nVidia coding within the driver has the composite and vsync in the wrong order?
Hmmm... well the above could very well be the case. Interesting hypothesis indeed!!
All the best
Andrew
Pure guesswork though!
Quote from: seth on February 17, 2009, 07:24:21 PM
Quote from: Afkpuz on February 17, 2009, 06:48:53 PM
I've also noticed that UI2+alpha is more responsive and clears out very quickly once I've pressed a button. In UI2+masking, calling up the gyro menu sometimes begins slowly and leaves black boxes. I've just learned to live with it, but alpha blending eliminates that for me.
Well, that is almost exactly what I was experiencing. Using the steps from the wiki, specifically:
Disable De-Interlacing
Change method to libmpeg2
Enable OpenGL
Fixed that issue immediately. My UI2-masked is ultra smooth and responsive now.
I just have to figure out how to get the changes to stick across a router reload. But go ahead and try it, just the first parts. I have not modified the recorded TV parts of it, just the playback. Makes it super nice.
Just give it a try ;)
Regards,
Seth
So, is there a specific wiki page describing this process, or should I look for each setting individually?
Hi,
Quote from: colinjones on February 17, 2009, 09:19:29 PM
It uses the 7050 nVidia chip, which is exactly the same as my core's on-board chipset... and I always had the tearing problem with it no matter what options I chose... I now use a 7300GT card for other reasons, but this too had the tearing problem in exactly the same way, even though general rendering on this chipset should be like 8 times faster.... So either you were VERY lucky somehow, or I would like to know why Fiire have not contributed a "fix" back for this!
A bit OT, but is this only when using UI2 + Alpha, or also with the basic UI2 interface?
I am asking because I get tearing when playing DVDs with the basic UI2 interface.
EDIT: I am using 1080p resolution.
Greetings
Viking
Tearing is a general issue as well, often related to not vsync'ing, for instance. With alphablending you will generally always get it (a very small number of people say they haven't, but generally...), and it seems always to be in the same spot on the screen, which is a little unusual. But with non-alpha modes you can get it as well, just like in any other graphics animation task... that is usually just because the video isn't vsync'd, and in that case the tear will usually move around the screen a bit, as the timing isn't the same with each frame. You can fix that by using the nvidia-settings tool... take a look on the wiki for more instructions...
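For reference, a minimal sketch of forcing vsync from the command line with that tool; the attribute names below are the NVIDIA driver's vsync controls from roughly this era, so query your own list first if unsure (nvidia-settings -q all):

    nvidia-settings -a SyncToVBlank=1               # sync OpenGL buffer swaps to vblank
    nvidia-settings -a XVideoTextureSyncToVBlank=1  # sync the XVideo texture adaptor
    nvidia-settings -a XVideoBlitterSyncToVBlank=1  # sync the XVideo blitter adaptor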
Hi colin,
I already searched a lot and also tried turning off vsync as shown in http://wiki.linuxmce.org/index.php/Nvidia_Card_Tweaks_For_Better_MythTV_and_UI_Performance
But that only made it worse, so I turned it on again.
Does it make a difference that I am using a European TV with 100Hz? (I bought the 100Hz version of the TV so that I would have the option of turning the motion-improvement feature on or off, but that made no difference here.)
I am at the moment testing DVD playback with SURFSUP.
At the beginning, during the zoom out and down of the Columbia woman with the torch, it stutters a bit. And then at 1:24 there are a couple of pans over some "documents" where I get some tearing and stuttering :(
I am also getting stuttering playback on VDR recordings. On camera movements it regularly stutters.
Greetings
Viking
Viking
Turning vsync ON is what you need to do to deal with the tearing issue, not off. That article is about performance issues (and I don't agree with it anyway).
European TV content is not 100Hz frame rate; it is 25Hz, or sometimes 50Hz (for progressive scan). Your TV running at 100Hz could potentially cause a strobing effect (unlikely); see if you can switch it back to 50/25Hz temporarily to be sure.
Certainly the mismatch between the frame rate of the video source and the refresh rate of the mode you are using on the graphics chipset could cause judder (but not normally tearing, if vsync is turned on). You should check what refresh rate the screen mode is using (probably 60Hz) and see if you can modify a modeline to pick a 50Hz screen mode. Also, trying something less than 1080p to start with might be useful, say 720p. These actions will reduce the workload on your graphics chipset and reduce the possibility of the strobing effect, both of which will have a positive effect on judder, but again, not usually on tearing.
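A quick way to check the current refresh rate, and to try a 720p/50 mode if your driver exposes one through RandR (assuming the classic xrandr size/rate options; the exact modes on offer vary per system):

    xrandr -q                  # lists available sizes and rates; the current one is marked
    xrandr -s 1280x720 -r 50   # try 720p at 50Hz if the list above offers it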
What graphics chipset are you using, and can you confirm that the xorg.conf file states that the nvidia driver (not nv or vesa) is being used? Review the Xorg.0.log file to confirm that there are no errors when it tries to select this driver.
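As commands, those two checks might look like this (paths are the Kubuntu defaults):

    grep -n 'Driver' /etc/X11/xorg.conf          # should report Driver "nvidia"
    grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log  # errors and warnings from the X server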
But first up, be clear on the effect you are getting: juddering/jerking is when pans/zooms or fast-moving objects do not appear to move smoothly, as if some frames were dropped. Tearing is completely different: you will get a horizontal line somewhere on the screen, during motion (particularly horizontal motion), where an image appears to be sheared sideways, so the image above that line is somewhat to the left or right of the remaining part of the same image below that line. Not by a long way, but enough to be noticeable. If your video is correctly vsync'd this should not happen at all, unless you are running in alphablended mode.
Finally, from the KDE desktop, run glxgears and expand the window to be large, then watch the output in the console you ran the tool from and note how many frames per second it is able to achieve. Without vsync, a decent card should get several hundred or even thousands of fps; with vsync it should be exactly 50 or 60 fps. When vsync is on, also note whether there is tearing on the animated gears.
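As a concrete version of that test (__GL_SYNC_TO_VBLANK is the NVIDIA driver's per-process vsync switch, so you can compare both states without touching global settings):

    glxgears                          # prints a frames-per-second figure every 5 seconds
    __GL_SYNC_TO_VBLANK=0 glxgears    # vsync off: expect hundreds or thousands of fps
    __GL_SYNC_TO_VBLANK=1 glxgears    # vsync on: expect a steady ~50 or ~60 fps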