I hope I'm not blogging too early, but I may have just solved something that has been driving me nuts!
I could not work out why Google refused to find my website, save for the PDF files.
I had everything in place that I should, I thought -- robots.txt, sitemap, etc. The site views fine in every browser I have on my PC.
But check it out with Google Webmaster Tools and it says, on every page, "network unreachable".
And now I think I know why.
Like every good website, I'm tracking visitors, and since this was my first bit of development in ASP.NET, I added my own visit-recording routine. It picks off a number of variables from the visitor's browser, including the UserLanguages collection. These are the default languages you like to browse in, and they were giving me a rough and ready guide to country of origin.
Except the googlebot browser is different. Either it sends no language preference at all, or the object that .NET uses to represent it lacks this collection ... and in .NET, if you try to access something that isn't there, you get an unhandled exception -- and the bot gets an error page instead of your page.
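For what it's worth, here's roughly the kind of null check that would have saved me. This is just a sketch, assuming the visit logger only wants the first entry of Request.UserLanguages; the variable names are mine, not from my actual routine:

```csharp
// Request.UserLanguages is null when the client (like googlebot) sends no
// Accept-Language header -- indexing into it then throws a NullReferenceException.
string language = "unknown";
string[] userLanguages = Request.UserLanguages;  // may be null for bots

if (userLanguages != null && userLanguages.Length > 0)
{
    language = userLanguages[0];  // e.g. "en-GB"
}

// ...record 'language' in the visit log as before...
```

Two lines of defence instead of none: check the array exists, then check it isn't empty.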
Of course, I should have asked the question on the Google forum ages ago, but you know, there are just some things you don't get around to! Well, I did today, and a kind respondent pointed me at a site (http://www.rexswain.com/httpview.html) that shows precisely what's going on from a web-botty perspective.
Dumb developer, I think, is really to blame. Hopefully, problem fixed. Fingers crossed as I wait for Google to mosey over to n10net for a new look. What a patient botty, to keep putting up with my incompetence!