I was working through some issues I was having with storage groups, only to discover that I ended up fixing a known issue (I really need to read the release notes more often), but at least I learned a lot about how storage groups work. That said, I do have some suggestions to make, or at least I wish to start some discussion on them while I implement them in the scripts.
In response to the following:
"One last change is in the "Default" storage group - it no longer points to the drive with most space. This was changed because eventually it would throw mythweb for a loop once the paths changed. I will look into a better way to defaultly record to the drive with most space, most likely using recording profiles."
It is my understanding that MythTV will always choose the device with the most free space, given that all other weights are equal and that all dirs are in the same storage group. Thus the easy solution to the above problem is simply to put all public/tv_shows/[device] dirs into the "Default" storage group. I have tested this on my setup and it works perfectly.
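For anyone who wants to try the same thing, storage group directories live as rows in the `storagegroup` table of the mythconverg database (mythtv-setup is the supported way to add them; editing the table directly is a shortcut). A minimal sketch, where 'mybackend' and the paths are placeholders for whatever your setup actually uses:

```sql
-- Put every public/tv_shows/[device] directory into the "Default"
-- storage group so MythTV's free-space balancing applies across all
-- of them. Hostname and paths are examples, not real values.
INSERT INTO storagegroup (groupname, hostname, dirname)
VALUES ('Default', 'mybackend', '/mnt/sda1/public/tv_shows/'),
       ('Default', 'mybackend', '/mnt/sdb1/public/tv_shows/');
```

Restart the backend afterwards so it picks up the new directories.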
This information was gleaned from http://www.mythtv.org/wiki/Storage_Groups#Storage_directories
It would probably also make sense to add the same set of dirs to the LiveTV storage group so that Myth can manage them properly as well.
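If the LiveTV idea is adopted, it looks (as far as I can tell) like the same rows with a different group name. Again a sketch with placeholder hostname and paths:

```sql
-- Same directories, registered under the "LiveTV" group so Myth can
-- expire live TV recordings across them too. Values are examples.
INSERT INTO storagegroup (groupname, hostname, dirname)
VALUES ('LiveTV', 'mybackend', '/mnt/sda1/public/tv_shows/'),
       ('LiveTV', 'mybackend', '/mnt/sdb1/public/tv_shows/');
```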
One additional change I made to my system was to give the system drive an artificially high weight so that it will only ever be used if everything else is full, but this is just because I have a very small system drive in my test rig (17 GB). It's a very easy thing to do: it is just a single line that lives in mythconverg.settings and would be exactly the same for every install, so it could be included in the install if this is something the average user would want.
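For reference, that single line is (if I am reading the wiki page linked above correctly) a `SGweightPerDir:<hostname>:<dir>` row in the settings table; the offset is added to the free-space-derived weight, and my understanding is that a large negative offset makes a directory much less likely to be chosen. Hostname and path below are placeholders:

```sql
-- Per-directory weight offset for the system drive. The extreme
-- negative value should keep this dir from being chosen until the
-- other dirs are full (semantics per the Storage Groups wiki).
-- 'mybackend' and the path are examples, not real values.
INSERT INTO settings (value, data, hostname)
VALUES ('SGweightPerDir:mybackend:/mnt/system/public/tv_shows/',
        '-10000', 'mybackend');
```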
Overall I am very impressed with the way MythTV has implemented these storage groups; the system is far more flexible than I initially thought possible, and the weighting scheme is very well designed.
I would like to see some discussion on this from developers and users alike.